00:00:00.000 Started by upstream project "autotest-per-patch" build number 132852 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.101 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.101 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.103 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.137 Fetching changes from the remote Git repository 00:00:00.138 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.176 Using shallow fetch with depth 1 00:00:00.176 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.176 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.240 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.240 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.054 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.065 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.076 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.076 > git config core.sparsecheckout # timeout=10 00:00:05.087 > git read-tree -mu HEAD # timeout=10 00:00:05.102 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.119 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.120 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.206 [Pipeline] Start of Pipeline 00:00:05.217 [Pipeline] library 00:00:05.218 Loading library shm_lib@master 00:00:05.219 Library shm_lib@master is cached. Copying from home. 00:00:05.233 [Pipeline] node 00:00:05.257 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:05.259 [Pipeline] { 00:00:05.266 [Pipeline] catchError 00:00:05.268 [Pipeline] { 00:00:05.278 [Pipeline] wrap 00:00:05.285 [Pipeline] { 00:00:05.290 [Pipeline] stage 00:00:05.291 [Pipeline] { (Prologue) 00:00:05.493 [Pipeline] sh 00:00:05.776 + logger -p user.info -t JENKINS-CI 00:00:05.793 [Pipeline] echo 00:00:05.797 Node: WFP20 00:00:05.804 [Pipeline] sh 00:00:06.103 [Pipeline] setCustomBuildProperty 00:00:06.116 [Pipeline] echo 00:00:06.118 Cleanup processes 00:00:06.124 [Pipeline] sh 00:00:06.408 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.408 338843 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.424 [Pipeline] sh 00:00:06.706 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.706 ++ grep -v 'sudo pgrep' 00:00:06.706 ++ awk '{print $1}' 00:00:06.706 + sudo kill -9 00:00:06.706 + true 00:00:06.724 [Pipeline] cleanWs 00:00:06.735 [WS-CLEANUP] Deleting project workspace... 00:00:06.735 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.742 [WS-CLEANUP] done 00:00:06.746 [Pipeline] setCustomBuildProperty 00:00:06.758 [Pipeline] sh 00:00:07.041 + sudo git config --global --replace-all safe.directory '*' 00:00:07.116 [Pipeline] httpRequest 00:00:07.490 [Pipeline] echo 00:00:07.492 Sorcerer 10.211.164.20 is alive 00:00:07.502 [Pipeline] retry 00:00:07.505 [Pipeline] { 00:00:07.522 [Pipeline] httpRequest 00:00:07.526 HttpMethod: GET 00:00:07.527 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.528 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.531 Response Code: HTTP/1.1 200 OK 00:00:07.531 Success: Status code 200 is in the accepted range: 200,404 00:00:07.532 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.688 [Pipeline] } 00:00:08.704 [Pipeline] // retry 00:00:08.711 [Pipeline] sh 00:00:08.996 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.012 [Pipeline] httpRequest 00:00:09.378 [Pipeline] echo 00:00:09.380 Sorcerer 10.211.164.20 is alive 00:00:09.391 [Pipeline] retry 00:00:09.393 [Pipeline] { 00:00:09.408 [Pipeline] httpRequest 00:00:09.413 HttpMethod: GET 00:00:09.413 URL: http://10.211.164.20/packages/spdk_44c641464f4b534faafcb7dc9ca966a07efd1773.tar.gz 00:00:09.414 Sending request to url: http://10.211.164.20/packages/spdk_44c641464f4b534faafcb7dc9ca966a07efd1773.tar.gz 00:00:09.430 Response Code: HTTP/1.1 200 OK 00:00:09.431 Success: Status code 200 is in the accepted range: 200,404 00:00:09.431 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_44c641464f4b534faafcb7dc9ca966a07efd1773.tar.gz 00:01:03.125 [Pipeline] } 00:01:03.142 [Pipeline] // retry 00:01:03.149 [Pipeline] sh 00:01:03.435 + tar --no-same-owner -xf spdk_44c641464f4b534faafcb7dc9ca966a07efd1773.tar.gz 00:01:05.983 [Pipeline] sh 00:01:06.269 + git -C spdk log --oneline -n5 00:01:06.269 44c641464 nvmf: added support for add/delete host wrt referral 00:01:06.269 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:06.269 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:06.269 66289a6db build: use VERSION file for storing version 00:01:06.269 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:06.279 [Pipeline] } 00:01:06.292 [Pipeline] // stage 00:01:06.301 [Pipeline] stage 00:01:06.303 [Pipeline] { (Prepare) 00:01:06.318 [Pipeline] writeFile 00:01:06.334 [Pipeline] sh 00:01:06.618 + logger -p user.info -t JENKINS-CI 00:01:06.630 [Pipeline] sh 00:01:06.915 + logger -p user.info -t JENKINS-CI 00:01:06.927 [Pipeline] sh 00:01:07.212 + cat autorun-spdk.conf 00:01:07.212 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.212 SPDK_TEST_FUZZER_SHORT=1 00:01:07.212 SPDK_TEST_FUZZER=1 00:01:07.212 SPDK_TEST_SETUP=1 00:01:07.212 SPDK_RUN_UBSAN=1 00:01:07.220 RUN_NIGHTLY=0 00:01:07.224 [Pipeline] readFile 00:01:07.247 [Pipeline] withEnv 00:01:07.249 [Pipeline] { 00:01:07.262 [Pipeline] sh 00:01:07.547 + set -ex 00:01:07.547 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:07.547 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:07.547 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.547 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:07.547 ++ SPDK_TEST_FUZZER=1 00:01:07.547 ++ SPDK_TEST_SETUP=1 00:01:07.547 ++ SPDK_RUN_UBSAN=1 00:01:07.547 ++ RUN_NIGHTLY=0 00:01:07.547 + case $SPDK_TEST_NVMF_NICS in 
00:01:07.547 + DRIVERS= 00:01:07.547 + [[ -n '' ]] 00:01:07.547 + exit 0 00:01:07.557 [Pipeline] } 00:01:07.571 [Pipeline] // withEnv 00:01:07.575 [Pipeline] } 00:01:07.588 [Pipeline] // stage 00:01:07.596 [Pipeline] catchError 00:01:07.598 [Pipeline] { 00:01:07.610 [Pipeline] timeout 00:01:07.610 Timeout set to expire in 30 min 00:01:07.612 [Pipeline] { 00:01:07.625 [Pipeline] stage 00:01:07.627 [Pipeline] { (Tests) 00:01:07.641 [Pipeline] sh 00:01:07.927 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.927 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.927 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.927 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:07.927 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:07.927 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:07.927 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:07.927 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:07.927 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:07.927 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:07.927 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:07.927 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.927 + source /etc/os-release 00:01:07.927 ++ NAME='Fedora Linux' 00:01:07.927 ++ VERSION='39 (Cloud Edition)' 00:01:07.927 ++ ID=fedora 00:01:07.927 ++ VERSION_ID=39 00:01:07.927 ++ VERSION_CODENAME= 00:01:07.927 ++ PLATFORM_ID=platform:f39 00:01:07.927 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:07.927 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:07.927 ++ LOGO=fedora-logo-icon 00:01:07.927 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:07.927 ++ HOME_URL=https://fedoraproject.org/ 00:01:07.927 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:07.927 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:07.927 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:07.927 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:07.927 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:07.927 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:07.927 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:07.927 ++ SUPPORT_END=2024-11-12 00:01:07.927 ++ VARIANT='Cloud Edition' 00:01:07.927 ++ VARIANT_ID=cloud 00:01:07.927 + uname -a 00:01:07.927 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:07.927 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:11.217 Hugepages 00:01:11.217 node hugesize free / total 00:01:11.217 node0 1048576kB 0 / 0 00:01:11.217 node0 2048kB 0 / 0 00:01:11.217 node1 1048576kB 0 / 0 00:01:11.217 node1 2048kB 0 / 0 00:01:11.217 00:01:11.217 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:11.217 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:11.217 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:11.217 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:11.217 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:11.217 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:11.217 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:11.217 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:11.217 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:11.217 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:11.217 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:11.217 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 
00:01:11.217 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:11.217 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:11.217 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:11.217 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:11.218 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:11.218 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:11.218 + rm -f /tmp/spdk-ld-path 00:01:11.218 + source autorun-spdk.conf 00:01:11.218 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.218 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:11.218 ++ SPDK_TEST_FUZZER=1 00:01:11.218 ++ SPDK_TEST_SETUP=1 00:01:11.218 ++ SPDK_RUN_UBSAN=1 00:01:11.218 ++ RUN_NIGHTLY=0 00:01:11.218 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:11.218 + [[ -n '' ]] 00:01:11.218 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:11.218 + for M in /var/spdk/build-*-manifest.txt 00:01:11.218 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:11.218 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:11.218 + for M in /var/spdk/build-*-manifest.txt 00:01:11.218 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:11.218 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:11.218 + for M in /var/spdk/build-*-manifest.txt 00:01:11.218 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:11.218 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:11.218 ++ uname 00:01:11.218 + [[ Linux == \L\i\n\u\x ]] 00:01:11.218 + sudo dmesg -T 00:01:11.218 + sudo dmesg --clear 00:01:11.218 + dmesg_pid=340302 00:01:11.218 + [[ Fedora Linux == FreeBSD ]] 00:01:11.218 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.218 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.218 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:11.218 + [[ -x /usr/src/fio-static/fio ]] 00:01:11.218 + export FIO_BIN=/usr/src/fio-static/fio 00:01:11.218 + FIO_BIN=/usr/src/fio-static/fio 00:01:11.218 + sudo dmesg -Tw 00:01:11.218 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:11.218 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:11.218 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:11.218 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.218 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.218 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:11.218 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.218 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.218 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:11.218 10:03:24 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:11.218 10:03:24 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:11.218 10:03:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.218 10:03:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1 00:01:11.218 10:03:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1 00:01:11.218 10:03:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1 00:01:11.218 10:03:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:11.218 10:03:24 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:11.218 10:03:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:11.218 10:03:24 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:11.478 10:03:24 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:11.478 10:03:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:11.478 10:03:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:11.478 10:03:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:11.478 10:03:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:11.478 10:03:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:11.478 10:03:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.478 10:03:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.478 10:03:24 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.478 10:03:24 -- paths/export.sh@5 -- $ export PATH 00:01:11.478 10:03:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.478 10:03:24 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:11.478 10:03:24 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:11.478 10:03:24 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733994204.XXXXXX 00:01:11.478 10:03:24 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733994204.rs11Qd 00:01:11.478 10:03:24 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:11.478 10:03:24 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:11.478 10:03:24 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:11.478 10:03:24 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:11.478 10:03:24 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:11.478 10:03:24 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:11.478 10:03:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:11.478 10:03:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.478 10:03:24 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:11.478 10:03:24 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:11.478 10:03:24 -- pm/common@17 -- $ local monitor 00:01:11.478 10:03:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.478 10:03:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.478 10:03:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.478 10:03:24 -- pm/common@21 -- $ date +%s 00:01:11.478 10:03:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.478 10:03:24 -- pm/common@21 -- $ date +%s 00:01:11.478 10:03:24 -- pm/common@25 -- $ sleep 1 00:01:11.478 10:03:24 -- pm/common@21 -- $ date +%s 00:01:11.478 10:03:24 -- pm/common@21 -- $ date +%s 00:01:11.478 10:03:24 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733994204 00:01:11.478 10:03:24 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733994204 00:01:11.478 10:03:24 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733994204 00:01:11.478 10:03:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733994204 00:01:11.478 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733994204_collect-vmstat.pm.log 00:01:11.478 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733994204_collect-cpu-load.pm.log 00:01:11.478 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733994204_collect-cpu-temp.pm.log 00:01:11.478 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733994204_collect-bmc-pm.bmc.pm.log 00:01:12.415 10:03:25 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:12.415 10:03:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:12.415 10:03:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:12.415 10:03:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:12.415 10:03:25 -- spdk/autobuild.sh@16 -- $ date -u 00:01:12.415 Thu Dec 12 09:03:25 AM UTC 2024 00:01:12.415 10:03:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:12.415 v25.01-rc1-2-g44c641464 00:01:12.415 10:03:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:12.415 10:03:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:12.415 10:03:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:12.415 10:03:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:12.415 10:03:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:12.415 10:03:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.415 ************************************ 00:01:12.415 START TEST ubsan 00:01:12.415 ************************************ 00:01:12.415 10:03:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:12.415 using ubsan 00:01:12.415 00:01:12.415 real 0m0.001s 00:01:12.415 user 0m0.000s 00:01:12.415 sys 0m0.001s 00:01:12.415 10:03:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:12.415 10:03:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:12.415 ************************************ 00:01:12.415 END TEST ubsan 00:01:12.415 ************************************ 00:01:12.415 10:03:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:12.415 10:03:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:12.415 10:03:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:12.415 10:03:26 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:12.415 10:03:26 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:12.415 10:03:26 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:12.415 10:03:26 -- common/autotest_common.sh@1105 -- $ '[' 2 
-le 1 ']' 00:01:12.415 10:03:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:12.415 10:03:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.674 ************************************ 00:01:12.674 START TEST autobuild_llvm_precompile 00:01:12.674 ************************************ 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:12.674 Target: x86_64-redhat-linux-gnu 00:01:12.674 Thread model: posix 00:01:12.674 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:12.674 10:03:26 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:12.934 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:12.934 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:13.193 Using 'verbs' RDMA provider 00:01:29.460 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:44.349 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:44.349 Creating mk/config.mk...done. 00:01:44.349 Creating mk/cc.flags.mk...done. 00:01:44.349 Type 'make' to build. 
00:01:44.349 00:01:44.349 real 0m30.232s 00:01:44.349 user 0m13.178s 00:01:44.349 sys 0m16.531s 00:01:44.349 10:03:56 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:44.349 10:03:56 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:44.349 ************************************ 00:01:44.349 END TEST autobuild_llvm_precompile 00:01:44.349 ************************************ 00:01:44.349 10:03:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:44.349 10:03:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:44.349 10:03:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:44.349 10:03:56 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:44.349 10:03:56 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:44.349 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:44.349 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:44.349 Using 'verbs' RDMA provider 00:01:56.563 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:08.775 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:08.776 Creating mk/config.mk...done. 00:02:08.776 Creating mk/cc.flags.mk...done. 00:02:08.776 Type 'make' to build. 00:02:08.776 10:04:22 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:08.776 10:04:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:08.776 10:04:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:08.776 10:04:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.776 ************************************ 00:02:08.776 START TEST make 00:02:08.776 ************************************ 00:02:08.776 10:04:22 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:10.685 The Meson build system 00:02:10.685 Version: 1.5.0 00:02:10.685 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:10.685 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:10.685 Build type: native build 00:02:10.685 Project name: libvfio-user 00:02:10.685 Project version: 0.0.1 00:02:10.685 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:10.685 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:10.685 Host machine cpu family: x86_64 00:02:10.685 Host machine cpu: x86_64 00:02:10.685 Run-time dependency threads found: YES 00:02:10.685 Library dl found: YES 00:02:10.685 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:10.685 Run-time dependency json-c found: YES 0.17 00:02:10.685 Run-time dependency cmocka found: YES 1.1.7 00:02:10.685 Program pytest-3 found: NO 00:02:10.685 Program flake8 found: NO 00:02:10.685 Program misspell-fixer found: NO 00:02:10.685 Program restructuredtext-lint found: NO 00:02:10.685 Program valgrind found: YES (/usr/bin/valgrind) 00:02:10.685 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:10.685 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:10.685 Compiler for C supports arguments 
-Wwrite-strings: YES 00:02:10.685 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:10.685 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:10.685 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:10.685 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:10.685 Build targets in project: 8 00:02:10.685 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:10.685 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:10.685 00:02:10.685 libvfio-user 0.0.1 00:02:10.685 00:02:10.685 User defined options 00:02:10.685 buildtype : debug 00:02:10.685 default_library: static 00:02:10.685 libdir : /usr/local/lib 00:02:10.685 00:02:10.685 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:10.944 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:11.203 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:11.203 [2/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:11.203 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:11.203 [4/36] Compiling C object samples/null.p/null.c.o 00:02:11.203 [5/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:11.203 [6/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:11.203 [7/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:11.203 [8/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:11.203 [9/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:11.203 [10/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:11.203 [11/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:11.203 [12/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:11.203 [13/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:11.203 [14/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:11.203 [15/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:11.203 [16/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:11.203 [17/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:11.203 [18/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:11.203 [19/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:11.203 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:11.203 [21/36] Compiling C object samples/server.p/server.c.o 00:02:11.203 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:11.203 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:11.203 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:11.203 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:11.203 [26/36] Compiling C object samples/client.p/client.c.o 00:02:11.203 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:11.203 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:11.203 [29/36] Linking static target lib/libvfio-user.a 00:02:11.203 [30/36] Linking target samples/client 00:02:11.203 [31/36] Linking target test/unit_tests 00:02:11.203 [32/36] Linking 
target samples/server 00:02:11.203 [33/36] Linking target samples/lspci 00:02:11.203 [34/36] Linking target samples/shadow_ioeventfd_server 00:02:11.203 [35/36] Linking target samples/gpio-pci-idio-16 00:02:11.203 [36/36] Linking target samples/null 00:02:11.203 INFO: autodetecting backend as ninja 00:02:11.203 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:11.462 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:11.723 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:11.723 ninja: no work to do. 00:02:17.000 The Meson build system 00:02:17.000 Version: 1.5.0 00:02:17.000 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:17.000 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:17.000 Build type: native build 00:02:17.000 Program cat found: YES (/usr/bin/cat) 00:02:17.000 Project name: DPDK 00:02:17.000 Project version: 24.03.0 00:02:17.000 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:17.000 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:17.000 Host machine cpu family: x86_64 00:02:17.000 Host machine cpu: x86_64 00:02:17.000 Message: ## Building in Developer Mode ## 00:02:17.000 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:17.000 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:17.000 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:17.000 Program python3 found: YES (/usr/bin/python3) 00:02:17.000 Program cat found: YES (/usr/bin/cat) 00:02:17.000 Compiler for C supports arguments -march=native: YES 00:02:17.000 Checking for size of "void *" : 8 00:02:17.000 Checking for size of "void *" : 8 (cached) 00:02:17.000 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:17.000 Library m found: YES 00:02:17.000 Library numa found: YES 00:02:17.000 Has header "numaif.h" : YES 00:02:17.000 Library fdt found: NO 00:02:17.000 Library execinfo found: NO 00:02:17.000 Has header "execinfo.h" : YES 00:02:17.000 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:17.000 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:17.000 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:17.000 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:17.000 Run-time dependency openssl found: YES 3.1.1 00:02:17.000 Run-time dependency libpcap found: YES 1.10.4 00:02:17.000 Has header "pcap.h" with dependency libpcap: YES 00:02:17.000 Compiler for C supports arguments -Wcast-qual: YES 00:02:17.000 Compiler for C supports arguments -Wdeprecated: YES 00:02:17.000 Compiler for C supports arguments -Wformat: YES 00:02:17.000 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:17.000 Compiler for C supports arguments -Wformat-security: YES 00:02:17.000 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.000 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:17.001 Compiler for C supports arguments -Wnested-externs: YES 00:02:17.001 Compiler for C supports arguments -Wold-style-definition: 
YES 00:02:17.001 Compiler for C supports arguments -Wpointer-arith: YES 00:02:17.001 Compiler for C supports arguments -Wsign-compare: YES 00:02:17.001 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:17.001 Compiler for C supports arguments -Wundef: YES 00:02:17.001 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.001 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:17.001 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:17.001 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.001 Program objdump found: YES (/usr/bin/objdump) 00:02:17.001 Compiler for C supports arguments -mavx512f: YES 00:02:17.001 Checking if "AVX512 checking" compiles: YES 00:02:17.001 Fetching value of define "__SSE4_2__" : 1 00:02:17.001 Fetching value of define "__AES__" : 1 00:02:17.001 Fetching value of define "__AVX__" : 1 00:02:17.001 Fetching value of define "__AVX2__" : 1 00:02:17.001 Fetching value of define "__AVX512BW__" : 1 00:02:17.001 Fetching value of define "__AVX512CD__" : 1 00:02:17.001 Fetching value of define "__AVX512DQ__" : 1 00:02:17.001 Fetching value of define "__AVX512F__" : 1 00:02:17.001 Fetching value of define "__AVX512VL__" : 1 00:02:17.001 Fetching value of define "__PCLMUL__" : 1 00:02:17.001 Fetching value of define "__RDRND__" : 1 00:02:17.001 Fetching value of define "__RDSEED__" : 1 00:02:17.001 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:17.001 Fetching value of define "__znver1__" : (undefined) 00:02:17.001 Fetching value of define "__znver2__" : (undefined) 00:02:17.001 Fetching value of define "__znver3__" : (undefined) 00:02:17.001 Fetching value of define "__znver4__" : (undefined) 00:02:17.001 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:17.001 Message: lib/log: Defining dependency "log" 00:02:17.001 Message: lib/kvargs: Defining dependency "kvargs" 00:02:17.001 Message: lib/telemetry: Defining dependency "telemetry" 00:02:17.001 Checking for function "getentropy" : NO 00:02:17.001 Message: lib/eal: Defining dependency "eal" 00:02:17.001 Message: lib/ring: Defining dependency "ring" 00:02:17.001 Message: lib/rcu: Defining dependency "rcu" 00:02:17.001 Message: lib/mempool: Defining dependency "mempool" 00:02:17.001 Message: lib/mbuf: Defining dependency "mbuf" 00:02:17.001 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:17.001 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.001 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.001 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.001 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:17.001 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:17.001 Compiler for C supports arguments -mpclmul: YES 00:02:17.001 Compiler for C supports arguments -maes: YES 00:02:17.001 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.001 Compiler for C supports arguments -mavx512bw: YES 00:02:17.001 Compiler for C supports arguments -mavx512dq: YES 00:02:17.001 Compiler for C supports arguments -mavx512vl: YES 00:02:17.001 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:17.001 Compiler for C supports arguments -mavx2: YES 00:02:17.001 Compiler for C supports arguments -mavx: YES 00:02:17.001 Message: lib/net: Defining dependency "net" 00:02:17.001 Message: lib/meter: Defining dependency "meter" 00:02:17.001 Message: lib/ethdev: Defining dependency "ethdev" 00:02:17.001 Message: lib/pci: Defining 
dependency "pci" 00:02:17.001 Message: lib/cmdline: Defining dependency "cmdline" 00:02:17.001 Message: lib/hash: Defining dependency "hash" 00:02:17.001 Message: lib/timer: Defining dependency "timer" 00:02:17.001 Message: lib/compressdev: Defining dependency "compressdev" 00:02:17.001 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:17.001 Message: lib/dmadev: Defining dependency "dmadev" 00:02:17.001 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:17.001 Message: lib/power: Defining dependency "power" 00:02:17.001 Message: lib/reorder: Defining dependency "reorder" 00:02:17.001 Message: lib/security: Defining dependency "security" 00:02:17.001 Has header "linux/userfaultfd.h" : YES 00:02:17.001 Has header "linux/vduse.h" : YES 00:02:17.001 Message: lib/vhost: Defining dependency "vhost" 00:02:17.001 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:17.001 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.001 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:17.001 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.001 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:17.001 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:17.001 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:17.001 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:17.001 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:17.001 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:17.001 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:17.001 Configuring doxy-api-html.conf using configuration 00:02:17.001 Configuring doxy-api-man.conf using configuration 00:02:17.001 Program mandb found: YES (/usr/bin/mandb) 00:02:17.001 Program sphinx-build found: NO 00:02:17.001 Configuring rte_build_config.h using configuration 00:02:17.001 Message: 00:02:17.001 ================= 00:02:17.001 Applications Enabled 00:02:17.001 ================= 00:02:17.001 00:02:17.001 apps: 00:02:17.001 00:02:17.001 00:02:17.001 Message: 00:02:17.001 ================= 00:02:17.001 Libraries Enabled 00:02:17.001 ================= 00:02:17.001 00:02:17.001 libs: 00:02:17.001 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:17.001 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:17.001 cryptodev, dmadev, power, reorder, security, vhost, 00:02:17.001 00:02:17.001 Message: 00:02:17.001 =============== 00:02:17.001 Drivers Enabled 00:02:17.001 =============== 00:02:17.001 00:02:17.001 common: 00:02:17.001 00:02:17.001 bus: 00:02:17.001 pci, vdev, 00:02:17.001 mempool: 00:02:17.001 ring, 00:02:17.001 dma: 00:02:17.001 00:02:17.001 net: 00:02:17.001 00:02:17.001 crypto: 00:02:17.001 00:02:17.001 compress: 00:02:17.001 00:02:17.001 vdpa: 00:02:17.001 00:02:17.001 00:02:17.001 Message: 00:02:17.001 ================= 00:02:17.001 Content Skipped 00:02:17.001 ================= 00:02:17.001 00:02:17.001 apps: 00:02:17.001 dumpcap: explicitly disabled via build config 00:02:17.001 graph: explicitly disabled via build config 00:02:17.001 pdump: explicitly disabled via build config 00:02:17.001 proc-info: explicitly disabled via build config 00:02:17.001 test-acl: explicitly disabled via build config 00:02:17.001 test-bbdev: explicitly disabled via build config 00:02:17.001 test-cmdline: explicitly disabled via build config 00:02:17.001 test-compress-perf: 
explicitly disabled via build config 00:02:17.001 test-crypto-perf: explicitly disabled via build config 00:02:17.001 test-dma-perf: explicitly disabled via build config 00:02:17.001 test-eventdev: explicitly disabled via build config 00:02:17.001 test-fib: explicitly disabled via build config 00:02:17.001 test-flow-perf: explicitly disabled via build config 00:02:17.001 test-gpudev: explicitly disabled via build config 00:02:17.001 test-mldev: explicitly disabled via build config 00:02:17.001 test-pipeline: explicitly disabled via build config 00:02:17.001 test-pmd: explicitly disabled via build config 00:02:17.001 test-regex: explicitly disabled via build config 00:02:17.001 test-sad: explicitly disabled via build config 00:02:17.001 test-security-perf: explicitly disabled via build config 00:02:17.001 00:02:17.001 libs: 00:02:17.001 argparse: explicitly disabled via build config 00:02:17.001 metrics: explicitly disabled via build config 00:02:17.001 acl: explicitly disabled via build config 00:02:17.001 bbdev: explicitly disabled via build config 00:02:17.001 bitratestats: explicitly disabled via build config 00:02:17.001 bpf: explicitly disabled via build config 00:02:17.001 cfgfile: explicitly disabled via build config 00:02:17.001 distributor: explicitly disabled via build config 00:02:17.001 efd: explicitly disabled via build config 00:02:17.001 eventdev: explicitly disabled via build config 00:02:17.001 dispatcher: explicitly disabled via build config 00:02:17.001 gpudev: explicitly disabled via build config 00:02:17.001 gro: explicitly disabled via build config 00:02:17.001 gso: explicitly disabled via build config 00:02:17.001 ip_frag: explicitly disabled via build config 00:02:17.001 jobstats: explicitly disabled via build config 00:02:17.001 latencystats: explicitly disabled via build config 00:02:17.001 lpm: explicitly disabled via build config 00:02:17.001 member: explicitly disabled via build config 00:02:17.001 pcapng: explicitly disabled via build config 00:02:17.001 rawdev: explicitly disabled via build config 00:02:17.001 regexdev: explicitly disabled via build config 00:02:17.001 mldev: explicitly disabled via build config 00:02:17.001 rib: explicitly disabled via build config 00:02:17.001 sched: explicitly disabled via build config 00:02:17.001 stack: explicitly disabled via build config 00:02:17.001 ipsec: explicitly disabled via build config 00:02:17.001 pdcp: explicitly disabled via build config 00:02:17.001 fib: explicitly disabled via build config 00:02:17.001 port: explicitly disabled via build config 00:02:17.001 pdump: explicitly disabled via build config 00:02:17.001 table: explicitly disabled via build config 00:02:17.001 pipeline: explicitly disabled via build config 00:02:17.001 graph: explicitly disabled via build config 00:02:17.001 node: explicitly disabled via build config 00:02:17.001 00:02:17.001 drivers: 00:02:17.001 common/cpt: not in enabled drivers build config 00:02:17.001 common/dpaax: not in enabled drivers build config 00:02:17.001 common/iavf: not in enabled drivers build config 00:02:17.001 common/idpf: not in enabled drivers build config 00:02:17.001 common/ionic: not in enabled drivers build config 00:02:17.001 common/mvep: not in enabled drivers build config 00:02:17.001 common/octeontx: not in enabled drivers build config 00:02:17.001 bus/auxiliary: not in enabled drivers build config 00:02:17.001 bus/cdx: not in enabled drivers build config 00:02:17.001 bus/dpaa: not in enabled drivers build config 00:02:17.001 bus/fslmc: not in enabled 
drivers build config 00:02:17.001 bus/ifpga: not in enabled drivers build config 00:02:17.001 bus/platform: not in enabled drivers build config 00:02:17.002 bus/uacce: not in enabled drivers build config 00:02:17.002 bus/vmbus: not in enabled drivers build config 00:02:17.002 common/cnxk: not in enabled drivers build config 00:02:17.002 common/mlx5: not in enabled drivers build config 00:02:17.002 common/nfp: not in enabled drivers build config 00:02:17.002 common/nitrox: not in enabled drivers build config 00:02:17.002 common/qat: not in enabled drivers build config 00:02:17.002 common/sfc_efx: not in enabled drivers build config 00:02:17.002 mempool/bucket: not in enabled drivers build config 00:02:17.002 mempool/cnxk: not in enabled drivers build config 00:02:17.002 mempool/dpaa: not in enabled drivers build config 00:02:17.002 mempool/dpaa2: not in enabled drivers build config 00:02:17.002 mempool/octeontx: not in enabled drivers build config 00:02:17.002 mempool/stack: not in enabled drivers build config 00:02:17.002 dma/cnxk: not in enabled drivers build config 00:02:17.002 dma/dpaa: not in enabled drivers build config 00:02:17.002 dma/dpaa2: not in enabled drivers build config 00:02:17.002 dma/hisilicon: not in enabled drivers build config 00:02:17.002 dma/idxd: not in enabled drivers build config 00:02:17.002 dma/ioat: not in enabled drivers build config 00:02:17.002 dma/skeleton: not in enabled drivers build config 00:02:17.002 net/af_packet: not in enabled drivers build config 00:02:17.002 net/af_xdp: not in enabled drivers build config 00:02:17.002 net/ark: not in enabled drivers build config 00:02:17.002 net/atlantic: not in enabled drivers build config 00:02:17.002 net/avp: not in enabled drivers build config 00:02:17.002 net/axgbe: not in enabled drivers build config 00:02:17.002 net/bnx2x: not in enabled drivers build config 00:02:17.002 net/bnxt: not in enabled drivers build config 00:02:17.002 net/bonding: not in enabled drivers build config 00:02:17.002 net/cnxk: not in enabled drivers build config 00:02:17.002 net/cpfl: not in enabled drivers build config 00:02:17.002 net/cxgbe: not in enabled drivers build config 00:02:17.002 net/dpaa: not in enabled drivers build config 00:02:17.002 net/dpaa2: not in enabled drivers build config 00:02:17.002 net/e1000: not in enabled drivers build config 00:02:17.002 net/ena: not in enabled drivers build config 00:02:17.002 net/enetc: not in enabled drivers build config 00:02:17.002 net/enetfec: not in enabled drivers build config 00:02:17.002 net/enic: not in enabled drivers build config 00:02:17.002 net/failsafe: not in enabled drivers build config 00:02:17.002 net/fm10k: not in enabled drivers build config 00:02:17.002 net/gve: not in enabled drivers build config 00:02:17.002 net/hinic: not in enabled drivers build config 00:02:17.002 net/hns3: not in enabled drivers build config 00:02:17.002 net/i40e: not in enabled drivers build config 00:02:17.002 net/iavf: not in enabled drivers build config 00:02:17.002 net/ice: not in enabled drivers build config 00:02:17.002 net/idpf: not in enabled drivers build config 00:02:17.002 net/igc: not in enabled drivers build config 00:02:17.002 net/ionic: not in enabled drivers build config 00:02:17.002 net/ipn3ke: not in enabled drivers build config 00:02:17.002 net/ixgbe: not in enabled drivers build config 00:02:17.002 net/mana: not in enabled drivers build config 00:02:17.002 net/memif: not in enabled drivers build config 00:02:17.002 net/mlx4: not in enabled drivers build config 00:02:17.002 
net/mlx5: not in enabled drivers build config 00:02:17.002 net/mvneta: not in enabled drivers build config 00:02:17.002 net/mvpp2: not in enabled drivers build config 00:02:17.002 net/netvsc: not in enabled drivers build config 00:02:17.002 net/nfb: not in enabled drivers build config 00:02:17.002 net/nfp: not in enabled drivers build config 00:02:17.002 net/ngbe: not in enabled drivers build config 00:02:17.002 net/null: not in enabled drivers build config 00:02:17.002 net/octeontx: not in enabled drivers build config 00:02:17.002 net/octeon_ep: not in enabled drivers build config 00:02:17.002 net/pcap: not in enabled drivers build config 00:02:17.002 net/pfe: not in enabled drivers build config 00:02:17.002 net/qede: not in enabled drivers build config 00:02:17.002 net/ring: not in enabled drivers build config 00:02:17.002 net/sfc: not in enabled drivers build config 00:02:17.002 net/softnic: not in enabled drivers build config 00:02:17.002 net/tap: not in enabled drivers build config 00:02:17.002 net/thunderx: not in enabled drivers build config 00:02:17.002 net/txgbe: not in enabled drivers build config 00:02:17.002 net/vdev_netvsc: not in enabled drivers build config 00:02:17.002 net/vhost: not in enabled drivers build config 00:02:17.002 net/virtio: not in enabled drivers build config 00:02:17.002 net/vmxnet3: not in enabled drivers build config 00:02:17.002 raw/*: missing internal dependency, "rawdev" 00:02:17.002 crypto/armv8: not in enabled drivers build config 00:02:17.002 crypto/bcmfs: not in enabled drivers build config 00:02:17.002 crypto/caam_jr: not in enabled drivers build config 00:02:17.002 crypto/ccp: not in enabled drivers build config 00:02:17.002 crypto/cnxk: not in enabled drivers build config 00:02:17.002 crypto/dpaa_sec: not in enabled drivers build config 00:02:17.002 crypto/dpaa2_sec: not in enabled drivers build config 00:02:17.002 crypto/ipsec_mb: not in enabled drivers build config 00:02:17.002 crypto/mlx5: not in enabled drivers build config 00:02:17.002 crypto/mvsam: not in enabled drivers build config 00:02:17.002 crypto/nitrox: not in enabled drivers build config 00:02:17.002 crypto/null: not in enabled drivers build config 00:02:17.002 crypto/octeontx: not in enabled drivers build config 00:02:17.002 crypto/openssl: not in enabled drivers build config 00:02:17.002 crypto/scheduler: not in enabled drivers build config 00:02:17.002 crypto/uadk: not in enabled drivers build config 00:02:17.002 crypto/virtio: not in enabled drivers build config 00:02:17.002 compress/isal: not in enabled drivers build config 00:02:17.002 compress/mlx5: not in enabled drivers build config 00:02:17.002 compress/nitrox: not in enabled drivers build config 00:02:17.002 compress/octeontx: not in enabled drivers build config 00:02:17.002 compress/zlib: not in enabled drivers build config 00:02:17.002 regex/*: missing internal dependency, "regexdev" 00:02:17.002 ml/*: missing internal dependency, "mldev" 00:02:17.002 vdpa/ifc: not in enabled drivers build config 00:02:17.002 vdpa/mlx5: not in enabled drivers build config 00:02:17.002 vdpa/nfp: not in enabled drivers build config 00:02:17.002 vdpa/sfc: not in enabled drivers build config 00:02:17.002 event/*: missing internal dependency, "eventdev" 00:02:17.002 baseband/*: missing internal dependency, "bbdev" 00:02:17.002 gpu/*: missing internal dependency, "gpudev" 00:02:17.002 00:02:17.002 00:02:17.261 Build targets in project: 85 00:02:17.261 00:02:17.261 DPDK 24.03.0 00:02:17.261 00:02:17.261 User defined options 00:02:17.261 
buildtype : debug 00:02:17.261 default_library : static 00:02:17.261 libdir : lib 00:02:17.261 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:17.261 c_args : -fPIC -Werror 00:02:17.261 c_link_args : 00:02:17.261 cpu_instruction_set: native 00:02:17.261 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:17.261 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:17.261 enable_docs : false 00:02:17.261 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:17.261 enable_kmods : false 00:02:17.261 max_lcores : 128 00:02:17.261 tests : false 00:02:17.261 00:02:17.261 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.835 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:17.835 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.835 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:17.835 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.835 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.835 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.836 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.836 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.836 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.836 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.836 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.836 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:17.836 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.836 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.836 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:17.836 [15/268] Linking static target lib/librte_kvargs.a 00:02:17.836 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:17.836 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:17.836 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:17.836 [19/268] Linking static target lib/librte_log.a 00:02:17.836 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:17.836 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:17.836 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:17.836 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:17.836 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:17.836 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.836 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.836 
[27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:17.836 [28/268] Linking static target lib/librte_pci.a 00:02:17.836 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.836 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:18.097 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:18.097 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:18.097 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:18.097 [34/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:18.097 [35/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:18.356 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.356 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:18.356 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:18.356 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:18.356 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:18.356 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:18.356 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:18.356 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:18.356 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:18.356 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.356 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:18.356 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.356 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.356 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:18.356 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.356 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.356 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:18.356 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:18.356 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.356 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.356 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:18.356 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:18.356 [58/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.356 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:18.356 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:18.356 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:18.356 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:18.356 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.356 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:18.356 [65/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.356 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:18.356 [67/268] Compiling C 
object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.356 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.356 [69/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.356 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:18.356 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.356 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.356 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.356 [74/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.356 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.356 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:18.356 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.356 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.356 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.356 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.356 [81/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.356 [82/268] Linking static target lib/librte_telemetry.a 00:02:18.356 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:18.356 [84/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.356 [85/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.356 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.356 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:18.356 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.356 [89/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:18.356 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.356 [91/268] Linking static target lib/librte_meter.a 00:02:18.356 [92/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.356 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.356 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.356 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.356 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:18.356 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:18.356 [98/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:18.356 [99/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:18.356 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.356 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.356 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.356 [103/268] Linking static target lib/librte_ring.a 00:02:18.356 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:18.616 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.616 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.616 [107/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:18.616 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:18.616 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:18.616 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.616 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:18.616 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.616 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:18.616 [114/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.616 [115/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.616 [116/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.616 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.616 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.616 [119/268] Linking static target lib/librte_cmdline.a 00:02:18.616 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:18.616 [121/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.616 [122/268] Linking static target lib/librte_timer.a 00:02:18.616 [123/268] Linking static target lib/librte_net.a 00:02:18.616 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:18.616 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:18.616 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:18.616 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.616 [128/268] Linking static target lib/librte_eal.a 00:02:18.616 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:18.616 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.616 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.616 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:18.616 [133/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.616 [134/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.616 [135/268] Linking static target lib/librte_rcu.a 00:02:18.616 [136/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:18.616 [137/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.616 [138/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.616 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:18.616 [140/268] Linking static target lib/librte_mempool.a 00:02:18.616 [141/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.616 [142/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.616 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:18.616 [144/268] Linking static target lib/librte_compressdev.a 00:02:18.616 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.616 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.616 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.616 [148/268] Linking static target lib/librte_mbuf.a 00:02:18.616 [149/268] 
Linking static target lib/librte_dmadev.a 00:02:18.616 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.616 [151/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.616 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.616 [153/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:18.616 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.616 [155/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.616 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.616 [157/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.616 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:18.616 [159/268] Linking static target lib/librte_hash.a 00:02:18.876 [160/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.876 [161/268] Linking target lib/librte_log.so.24.1 00:02:18.876 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.876 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.876 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.876 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:18.876 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:18.876 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.876 [168/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.876 [169/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.876 [170/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.876 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.876 [172/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.876 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.876 [174/268] Linking static target lib/librte_power.a 00:02:18.876 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:18.876 [176/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.876 [177/268] Linking static target lib/librte_reorder.a 00:02:18.876 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:18.876 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.876 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:18.876 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:18.876 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.876 [183/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:18.876 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.876 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:18.876 [186/268] Linking static target lib/librte_cryptodev.a 00:02:18.876 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:18.876 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:18.876 [189/268] Generating lib/rcu.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:18.876 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:18.877 [191/268] Linking static target lib/librte_security.a 00:02:18.877 [192/268] Linking target lib/librte_kvargs.so.24.1 00:02:18.877 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:18.877 [194/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:19.136 [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:19.136 [196/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.136 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.136 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:19.136 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:19.136 [200/268] Linking target lib/librte_telemetry.so.24.1 00:02:19.136 [201/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:19.136 [202/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.136 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.136 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:19.136 [205/268] Linking static target drivers/librte_mempool_ring.a 00:02:19.136 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:19.136 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:19.136 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:19.136 [209/268] Linking static target drivers/librte_bus_pci.a 00:02:19.136 [210/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:19.136 [211/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.136 [212/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:19.136 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:19.395 [214/268] Linking static target drivers/librte_bus_vdev.a 00:02:19.395 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:19.395 [216/268] Linking static target lib/librte_ethdev.a 00:02:19.395 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.395 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.395 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.395 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.654 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.654 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.654 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.913 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.913 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.913 [226/268] Linking static target 
lib/librte_vhost.a 00:02:19.913 [227/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.913 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.171 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.106 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.041 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.166 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.080 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.080 [234/268] Linking target lib/librte_eal.so.24.1 00:02:32.080 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:32.080 [236/268] Linking target lib/librte_pci.so.24.1 00:02:32.080 [237/268] Linking target lib/librte_meter.so.24.1 00:02:32.080 [238/268] Linking target lib/librte_ring.so.24.1 00:02:32.080 [239/268] Linking target lib/librte_timer.so.24.1 00:02:32.080 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:32.080 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:32.080 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:32.080 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:32.080 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:32.080 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:32.080 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:32.339 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:32.339 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:32.339 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:32.339 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:32.339 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:32.339 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:32.339 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:32.599 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:32.599 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:32.599 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:32.599 [257/268] Linking target lib/librte_net.so.24.1 00:02:32.599 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:32.858 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:32.858 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:32.858 [261/268] Linking target lib/librte_hash.so.24.1 00:02:32.858 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:32.858 [263/268] Linking target lib/librte_security.so.24.1 00:02:32.858 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:33.118 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:33.118 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:33.118 [267/268] Linking target lib/librte_power.so.24.1 00:02:33.118 [268/268] Linking target lib/librte_vhost.so.24.1 
00:02:33.118 INFO: autodetecting backend as ninja 00:02:33.118 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:34.056 CC lib/ut_mock/mock.o 00:02:34.056 CC lib/ut/ut.o 00:02:34.056 CC lib/log/log.o 00:02:34.056 CC lib/log/log_flags.o 00:02:34.056 CC lib/log/log_deprecated.o 00:02:34.315 LIB libspdk_ut_mock.a 00:02:34.315 LIB libspdk_ut.a 00:02:34.315 LIB libspdk_log.a 00:02:34.574 CXX lib/trace_parser/trace.o 00:02:34.574 CC lib/dma/dma.o 00:02:34.574 CC lib/ioat/ioat.o 00:02:34.574 CC lib/util/base64.o 00:02:34.574 CC lib/util/bit_array.o 00:02:34.574 CC lib/util/cpuset.o 00:02:34.574 CC lib/util/crc16.o 00:02:34.574 CC lib/util/crc32.o 00:02:34.574 CC lib/util/crc32c.o 00:02:34.574 CC lib/util/crc32_ieee.o 00:02:34.574 CC lib/util/crc64.o 00:02:34.574 CC lib/util/dif.o 00:02:34.574 CC lib/util/fd.o 00:02:34.574 CC lib/util/fd_group.o 00:02:34.574 CC lib/util/file.o 00:02:34.574 CC lib/util/hexlify.o 00:02:34.574 CC lib/util/iov.o 00:02:34.574 CC lib/util/math.o 00:02:34.574 CC lib/util/net.o 00:02:34.574 CC lib/util/pipe.o 00:02:34.574 CC lib/util/strerror_tls.o 00:02:34.574 CC lib/util/string.o 00:02:34.574 CC lib/util/uuid.o 00:02:34.574 CC lib/util/xor.o 00:02:34.574 CC lib/util/zipf.o 00:02:34.574 CC lib/util/md5.o 00:02:34.832 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.832 CC lib/vfio_user/host/vfio_user.o 00:02:34.832 LIB libspdk_dma.a 00:02:34.832 LIB libspdk_ioat.a 00:02:34.832 LIB libspdk_vfio_user.a 00:02:35.091 LIB libspdk_util.a 00:02:35.091 LIB libspdk_trace_parser.a 00:02:35.350 CC lib/json/json_parse.o 00:02:35.350 CC lib/json/json_util.o 00:02:35.350 CC lib/json/json_write.o 00:02:35.350 CC lib/rdma_utils/rdma_utils.o 00:02:35.350 CC lib/vmd/vmd.o 00:02:35.350 CC lib/vmd/led.o 00:02:35.350 CC lib/idxd/idxd.o 00:02:35.350 CC lib/idxd/idxd_user.o 00:02:35.350 CC lib/idxd/idxd_kernel.o 00:02:35.350 CC lib/env_dpdk/env.o 00:02:35.350 CC lib/env_dpdk/memory.o 00:02:35.350 CC lib/env_dpdk/pci.o 00:02:35.350 CC lib/conf/conf.o 00:02:35.350 CC lib/env_dpdk/init.o 00:02:35.350 CC lib/env_dpdk/threads.o 00:02:35.350 CC lib/env_dpdk/pci_ioat.o 00:02:35.350 CC lib/env_dpdk/pci_virtio.o 00:02:35.350 CC lib/env_dpdk/pci_vmd.o 00:02:35.350 CC lib/env_dpdk/pci_idxd.o 00:02:35.350 CC lib/env_dpdk/pci_event.o 00:02:35.350 CC lib/env_dpdk/sigbus_handler.o 00:02:35.350 CC lib/env_dpdk/pci_dpdk.o 00:02:35.350 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.350 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.611 LIB libspdk_conf.a 00:02:35.611 LIB libspdk_rdma_utils.a 00:02:35.611 LIB libspdk_json.a 00:02:35.611 LIB libspdk_idxd.a 00:02:35.611 LIB libspdk_vmd.a 00:02:35.873 CC lib/rdma_provider/common.o 00:02:35.873 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:35.873 CC lib/jsonrpc/jsonrpc_server.o 00:02:35.873 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:35.873 CC lib/jsonrpc/jsonrpc_client.o 00:02:35.873 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.133 LIB libspdk_rdma_provider.a 00:02:36.133 LIB libspdk_jsonrpc.a 00:02:36.392 LIB libspdk_env_dpdk.a 00:02:36.392 CC lib/rpc/rpc.o 00:02:36.651 LIB libspdk_rpc.a 00:02:36.910 CC lib/trace/trace.o 00:02:36.910 CC lib/trace/trace_flags.o 00:02:36.910 CC lib/trace/trace_rpc.o 00:02:36.910 CC lib/keyring/keyring.o 00:02:36.910 CC lib/notify/notify.o 00:02:36.910 CC lib/keyring/keyring_rpc.o 00:02:36.910 CC lib/notify/notify_rpc.o 00:02:37.170 LIB libspdk_notify.a 00:02:37.170 LIB libspdk_trace.a 00:02:37.170 LIB libspdk_keyring.a 00:02:37.429 CC lib/sock/sock.o 
00:02:37.429 CC lib/sock/sock_rpc.o 00:02:37.429 CC lib/thread/thread.o 00:02:37.429 CC lib/thread/iobuf.o 00:02:37.687 LIB libspdk_sock.a 00:02:37.946 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:37.946 CC lib/nvme/nvme_ctrlr.o 00:02:37.946 CC lib/nvme/nvme_fabric.o 00:02:37.946 CC lib/nvme/nvme_ns_cmd.o 00:02:37.946 CC lib/nvme/nvme_ns.o 00:02:37.946 CC lib/nvme/nvme_pcie_common.o 00:02:37.946 CC lib/nvme/nvme_pcie.o 00:02:37.946 CC lib/nvme/nvme_qpair.o 00:02:37.946 CC lib/nvme/nvme.o 00:02:37.946 CC lib/nvme/nvme_quirks.o 00:02:37.946 CC lib/nvme/nvme_transport.o 00:02:37.946 CC lib/nvme/nvme_discovery.o 00:02:38.206 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.206 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.206 CC lib/nvme/nvme_tcp.o 00:02:38.206 CC lib/nvme/nvme_opal.o 00:02:38.206 CC lib/nvme/nvme_io_msg.o 00:02:38.206 CC lib/nvme/nvme_poll_group.o 00:02:38.206 CC lib/nvme/nvme_zns.o 00:02:38.206 CC lib/nvme/nvme_stubs.o 00:02:38.206 CC lib/nvme/nvme_auth.o 00:02:38.206 CC lib/nvme/nvme_cuse.o 00:02:38.206 CC lib/nvme/nvme_vfio_user.o 00:02:38.206 CC lib/nvme/nvme_rdma.o 00:02:38.206 LIB libspdk_thread.a 00:02:38.465 CC lib/fsdev/fsdev.o 00:02:38.465 CC lib/fsdev/fsdev_io.o 00:02:38.465 CC lib/fsdev/fsdev_rpc.o 00:02:38.465 CC lib/virtio/virtio.o 00:02:38.465 CC lib/virtio/virtio_vhost_user.o 00:02:38.465 CC lib/virtio/virtio_vfio_user.o 00:02:38.465 CC lib/virtio/virtio_pci.o 00:02:38.465 CC lib/accel/accel.o 00:02:38.465 CC lib/accel/accel_rpc.o 00:02:38.465 CC lib/accel/accel_sw.o 00:02:38.465 CC lib/init/json_config.o 00:02:38.465 CC lib/init/subsystem.o 00:02:38.465 CC lib/init/subsystem_rpc.o 00:02:38.465 CC lib/init/rpc.o 00:02:38.465 CC lib/vfu_tgt/tgt_endpoint.o 00:02:38.465 CC lib/vfu_tgt/tgt_rpc.o 00:02:38.465 CC lib/blob/blobstore.o 00:02:38.465 CC lib/blob/request.o 00:02:38.465 CC lib/blob/zeroes.o 00:02:38.724 CC lib/blob/blob_bs_dev.o 00:02:38.724 LIB libspdk_init.a 00:02:38.724 LIB libspdk_virtio.a 00:02:38.724 LIB libspdk_vfu_tgt.a 00:02:38.983 LIB libspdk_fsdev.a 00:02:38.983 CC lib/event/app.o 00:02:38.983 CC lib/event/reactor.o 00:02:38.983 CC lib/event/log_rpc.o 00:02:38.983 CC lib/event/app_rpc.o 00:02:38.983 CC lib/event/scheduler_static.o 00:02:39.242 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:39.242 LIB libspdk_accel.a 00:02:39.242 LIB libspdk_event.a 00:02:39.501 LIB libspdk_nvme.a 00:02:39.760 LIB libspdk_fuse_dispatcher.a 00:02:39.760 CC lib/bdev/bdev.o 00:02:39.760 CC lib/bdev/bdev_rpc.o 00:02:39.760 CC lib/bdev/bdev_zone.o 00:02:39.760 CC lib/bdev/part.o 00:02:39.760 CC lib/bdev/scsi_nvme.o 00:02:40.328 LIB libspdk_blob.a 00:02:40.587 CC lib/blobfs/blobfs.o 00:02:40.587 CC lib/blobfs/tree.o 00:02:40.587 CC lib/lvol/lvol.o 00:02:41.156 LIB libspdk_lvol.a 00:02:41.156 LIB libspdk_blobfs.a 00:02:41.415 LIB libspdk_bdev.a 00:02:41.680 CC lib/nvmf/ctrlr.o 00:02:41.680 CC lib/nvmf/ctrlr_discovery.o 00:02:41.680 CC lib/nvmf/ctrlr_bdev.o 00:02:41.680 CC lib/nvmf/subsystem.o 00:02:41.680 CC lib/nvmf/nvmf.o 00:02:41.680 CC lib/nvmf/nvmf_rpc.o 00:02:41.680 CC lib/nvmf/transport.o 00:02:41.680 CC lib/nvmf/tcp.o 00:02:41.680 CC lib/nvmf/stubs.o 00:02:41.680 CC lib/nvmf/mdns_server.o 00:02:41.680 CC lib/nvmf/rdma.o 00:02:41.680 CC lib/nvmf/vfio_user.o 00:02:41.681 CC lib/nvmf/auth.o 00:02:41.681 CC lib/nbd/nbd.o 00:02:41.681 CC lib/nbd/nbd_rpc.o 00:02:41.681 CC lib/ublk/ublk.o 00:02:41.681 CC lib/ftl/ftl_core.o 00:02:41.681 CC lib/ublk/ublk_rpc.o 00:02:41.681 CC lib/ftl/ftl_init.o 00:02:41.681 CC lib/ftl/ftl_layout.o 00:02:41.681 CC lib/ftl/ftl_debug.o 00:02:41.681 CC 
lib/ftl/ftl_io.o 00:02:41.681 CC lib/ftl/ftl_sb.o 00:02:41.681 CC lib/ftl/ftl_l2p.o 00:02:41.681 CC lib/ftl/ftl_l2p_flat.o 00:02:41.681 CC lib/ftl/ftl_nv_cache.o 00:02:41.681 CC lib/ftl/ftl_band.o 00:02:41.681 CC lib/ftl/ftl_band_ops.o 00:02:41.681 CC lib/scsi/dev.o 00:02:41.681 CC lib/ftl/ftl_writer.o 00:02:41.681 CC lib/scsi/lun.o 00:02:41.681 CC lib/ftl/ftl_rq.o 00:02:41.681 CC lib/scsi/port.o 00:02:41.681 CC lib/ftl/ftl_reloc.o 00:02:41.681 CC lib/scsi/scsi.o 00:02:41.681 CC lib/ftl/ftl_l2p_cache.o 00:02:41.681 CC lib/ftl/ftl_p2l.o 00:02:41.681 CC lib/scsi/scsi_bdev.o 00:02:41.681 CC lib/ftl/ftl_p2l_log.o 00:02:41.681 CC lib/scsi/scsi_pr.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt.o 00:02:41.681 CC lib/scsi/scsi_rpc.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:41.681 CC lib/scsi/task.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:41.681 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:41.681 CC lib/ftl/utils/ftl_md.o 00:02:41.681 CC lib/ftl/utils/ftl_conf.o 00:02:41.681 CC lib/ftl/utils/ftl_mempool.o 00:02:41.681 CC lib/ftl/utils/ftl_property.o 00:02:41.681 CC lib/ftl/utils/ftl_bitmap.o 00:02:41.940 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:41.940 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:41.940 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:41.940 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:41.940 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:41.940 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:41.940 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:41.940 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:41.940 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:41.940 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:41.940 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:41.940 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:41.940 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:41.940 CC lib/ftl/base/ftl_base_dev.o 00:02:41.940 CC lib/ftl/base/ftl_base_bdev.o 00:02:41.940 CC lib/ftl/ftl_trace.o 00:02:42.199 LIB libspdk_nbd.a 00:02:42.199 LIB libspdk_scsi.a 00:02:42.199 LIB libspdk_ublk.a 00:02:42.458 LIB libspdk_ftl.a 00:02:42.717 CC lib/vhost/vhost.o 00:02:42.717 CC lib/vhost/vhost_rpc.o 00:02:42.717 CC lib/iscsi/conn.o 00:02:42.717 CC lib/vhost/vhost_scsi.o 00:02:42.717 CC lib/iscsi/init_grp.o 00:02:42.717 CC lib/vhost/vhost_blk.o 00:02:42.717 CC lib/iscsi/iscsi.o 00:02:42.717 CC lib/vhost/rte_vhost_user.o 00:02:42.717 CC lib/iscsi/param.o 00:02:42.717 CC lib/iscsi/portal_grp.o 00:02:42.717 CC lib/iscsi/tgt_node.o 00:02:42.717 CC lib/iscsi/iscsi_subsystem.o 00:02:42.717 CC lib/iscsi/iscsi_rpc.o 00:02:42.717 CC lib/iscsi/task.o 00:02:42.977 LIB libspdk_nvmf.a 00:02:43.237 LIB libspdk_vhost.a 00:02:43.237 LIB libspdk_iscsi.a 00:02:43.805 CC module/vfu_device/vfu_virtio.o 00:02:43.805 CC module/vfu_device/vfu_virtio_blk.o 00:02:43.805 CC module/vfu_device/vfu_virtio_rpc.o 00:02:43.805 CC module/vfu_device/vfu_virtio_scsi.o 00:02:43.805 CC module/vfu_device/vfu_virtio_fs.o 00:02:43.805 CC module/env_dpdk/env_dpdk_rpc.o 00:02:44.065 LIB libspdk_env_dpdk_rpc.a 00:02:44.065 CC module/accel/iaa/accel_iaa.o 00:02:44.065 CC module/accel/iaa/accel_iaa_rpc.o 00:02:44.065 CC module/scheduler/gscheduler/gscheduler.o 00:02:44.065 CC 
module/blob/bdev/blob_bdev.o 00:02:44.065 CC module/fsdev/aio/fsdev_aio.o 00:02:44.065 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:44.065 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:44.065 CC module/fsdev/aio/linux_aio_mgr.o 00:02:44.065 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:44.065 CC module/sock/posix/posix.o 00:02:44.065 CC module/keyring/linux/keyring.o 00:02:44.065 CC module/accel/error/accel_error.o 00:02:44.065 CC module/keyring/linux/keyring_rpc.o 00:02:44.065 CC module/accel/ioat/accel_ioat.o 00:02:44.065 CC module/accel/error/accel_error_rpc.o 00:02:44.065 CC module/keyring/file/keyring.o 00:02:44.065 CC module/accel/ioat/accel_ioat_rpc.o 00:02:44.065 CC module/keyring/file/keyring_rpc.o 00:02:44.065 CC module/accel/dsa/accel_dsa_rpc.o 00:02:44.065 CC module/accel/dsa/accel_dsa.o 00:02:44.065 LIB libspdk_scheduler_gscheduler.a 00:02:44.065 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.065 LIB libspdk_keyring_linux.a 00:02:44.065 LIB libspdk_keyring_file.a 00:02:44.065 LIB libspdk_scheduler_dynamic.a 00:02:44.065 LIB libspdk_accel_iaa.a 00:02:44.065 LIB libspdk_accel_ioat.a 00:02:44.065 LIB libspdk_accel_error.a 00:02:44.324 LIB libspdk_blob_bdev.a 00:02:44.324 LIB libspdk_accel_dsa.a 00:02:44.324 LIB libspdk_vfu_device.a 00:02:44.583 LIB libspdk_fsdev_aio.a 00:02:44.583 LIB libspdk_sock_posix.a 00:02:44.842 CC module/bdev/gpt/gpt.o 00:02:44.842 CC module/bdev/malloc/bdev_malloc.o 00:02:44.842 CC module/bdev/gpt/vbdev_gpt.o 00:02:44.842 CC module/bdev/null/bdev_null.o 00:02:44.842 CC module/bdev/passthru/vbdev_passthru.o 00:02:44.842 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:44.842 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:44.842 CC module/bdev/null/bdev_null_rpc.o 00:02:44.842 CC module/bdev/error/vbdev_error.o 00:02:44.842 CC module/bdev/error/vbdev_error_rpc.o 00:02:44.842 CC module/bdev/nvme/bdev_nvme.o 00:02:44.842 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:44.842 CC module/bdev/nvme/nvme_rpc.o 00:02:44.842 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.842 CC module/bdev/ftl/bdev_ftl.o 00:02:44.842 CC module/bdev/nvme/bdev_mdns_client.o 00:02:44.842 CC module/bdev/nvme/vbdev_opal.o 00:02:44.842 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.842 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:44.842 CC module/bdev/iscsi/bdev_iscsi.o 00:02:44.842 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:44.842 CC module/bdev/delay/vbdev_delay.o 00:02:44.842 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:44.842 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:44.842 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:44.842 CC module/bdev/split/vbdev_split.o 00:02:44.842 CC module/bdev/split/vbdev_split_rpc.o 00:02:44.842 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:44.842 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.842 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:44.842 CC module/bdev/raid/bdev_raid.o 00:02:44.842 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:44.842 CC module/bdev/raid/bdev_raid_sb.o 00:02:44.842 CC module/bdev/raid/bdev_raid_rpc.o 00:02:44.842 CC module/bdev/raid/raid0.o 00:02:44.842 CC module/bdev/raid/raid1.o 00:02:44.842 CC module/bdev/raid/concat.o 00:02:44.842 CC module/bdev/aio/bdev_aio_rpc.o 00:02:44.842 CC module/bdev/aio/bdev_aio.o 00:02:44.842 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:44.842 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:44.842 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:44.842 LIB libspdk_blobfs_bdev.a 00:02:44.842 LIB libspdk_bdev_split.a 00:02:44.842 LIB libspdk_bdev_error.a 00:02:44.842 LIB libspdk_bdev_gpt.a 
00:02:44.842 LIB libspdk_bdev_null.a 00:02:44.842 LIB libspdk_bdev_passthru.a 00:02:44.842 LIB libspdk_bdev_ftl.a 00:02:45.101 LIB libspdk_bdev_iscsi.a 00:02:45.102 LIB libspdk_bdev_zone_block.a 00:02:45.102 LIB libspdk_bdev_malloc.a 00:02:45.102 LIB libspdk_bdev_aio.a 00:02:45.102 LIB libspdk_bdev_delay.a 00:02:45.102 LIB libspdk_bdev_lvol.a 00:02:45.102 LIB libspdk_bdev_virtio.a 00:02:45.361 LIB libspdk_bdev_raid.a 00:02:46.299 LIB libspdk_bdev_nvme.a 00:02:46.866 CC module/event/subsystems/iobuf/iobuf.o 00:02:46.866 CC module/event/subsystems/sock/sock.o 00:02:46.866 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:46.866 CC module/event/subsystems/scheduler/scheduler.o 00:02:46.866 CC module/event/subsystems/fsdev/fsdev.o 00:02:46.866 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:46.866 CC module/event/subsystems/vmd/vmd.o 00:02:46.866 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:46.866 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:46.866 CC module/event/subsystems/keyring/keyring.o 00:02:46.866 LIB libspdk_event_vhost_blk.a 00:02:46.866 LIB libspdk_event_vfu_tgt.a 00:02:46.866 LIB libspdk_event_sock.a 00:02:46.866 LIB libspdk_event_fsdev.a 00:02:46.866 LIB libspdk_event_vmd.a 00:02:46.866 LIB libspdk_event_keyring.a 00:02:46.866 LIB libspdk_event_iobuf.a 00:02:46.866 LIB libspdk_event_scheduler.a 00:02:47.435 CC module/event/subsystems/accel/accel.o 00:02:47.435 LIB libspdk_event_accel.a 00:02:47.694 CC module/event/subsystems/bdev/bdev.o 00:02:47.954 LIB libspdk_event_bdev.a 00:02:48.214 CC module/event/subsystems/scsi/scsi.o 00:02:48.214 CC module/event/subsystems/nbd/nbd.o 00:02:48.214 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:48.214 CC module/event/subsystems/ublk/ublk.o 00:02:48.214 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:48.473 LIB libspdk_event_ublk.a 00:02:48.473 LIB libspdk_event_nbd.a 00:02:48.473 LIB libspdk_event_scsi.a 00:02:48.473 LIB libspdk_event_nvmf.a 00:02:48.733 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:48.733 CC module/event/subsystems/iscsi/iscsi.o 00:02:48.993 LIB libspdk_event_vhost_scsi.a 00:02:48.993 LIB libspdk_event_iscsi.a 00:02:49.253 TEST_HEADER include/spdk/accel.h 00:02:49.253 TEST_HEADER include/spdk/accel_module.h 00:02:49.253 TEST_HEADER include/spdk/assert.h 00:02:49.253 TEST_HEADER include/spdk/barrier.h 00:02:49.253 TEST_HEADER include/spdk/bdev_module.h 00:02:49.253 TEST_HEADER include/spdk/base64.h 00:02:49.253 TEST_HEADER include/spdk/bit_array.h 00:02:49.253 TEST_HEADER include/spdk/bdev.h 00:02:49.253 TEST_HEADER include/spdk/bdev_zone.h 00:02:49.253 TEST_HEADER include/spdk/bit_pool.h 00:02:49.253 TEST_HEADER include/spdk/blob_bdev.h 00:02:49.253 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:49.253 TEST_HEADER include/spdk/blobfs.h 00:02:49.253 TEST_HEADER include/spdk/blob.h 00:02:49.253 TEST_HEADER include/spdk/conf.h 00:02:49.253 TEST_HEADER include/spdk/config.h 00:02:49.253 CC app/trace_record/trace_record.o 00:02:49.253 TEST_HEADER include/spdk/cpuset.h 00:02:49.253 TEST_HEADER include/spdk/crc16.h 00:02:49.253 TEST_HEADER include/spdk/crc32.h 00:02:49.253 TEST_HEADER include/spdk/crc64.h 00:02:49.253 TEST_HEADER include/spdk/dif.h 00:02:49.253 TEST_HEADER include/spdk/dma.h 00:02:49.253 TEST_HEADER include/spdk/endian.h 00:02:49.253 CXX app/trace/trace.o 00:02:49.253 TEST_HEADER include/spdk/env_dpdk.h 00:02:49.253 CC app/spdk_top/spdk_top.o 00:02:49.253 TEST_HEADER include/spdk/env.h 00:02:49.253 TEST_HEADER include/spdk/fd_group.h 00:02:49.253 TEST_HEADER include/spdk/event.h 
00:02:49.253 TEST_HEADER include/spdk/fd.h 00:02:49.253 TEST_HEADER include/spdk/file.h 00:02:49.253 TEST_HEADER include/spdk/fsdev.h 00:02:49.253 CC test/rpc_client/rpc_client_test.o 00:02:49.253 TEST_HEADER include/spdk/fsdev_module.h 00:02:49.253 CC app/spdk_lspci/spdk_lspci.o 00:02:49.253 TEST_HEADER include/spdk/gpt_spec.h 00:02:49.253 CC app/spdk_nvme_identify/identify.o 00:02:49.253 TEST_HEADER include/spdk/ftl.h 00:02:49.253 TEST_HEADER include/spdk/hexlify.h 00:02:49.253 TEST_HEADER include/spdk/histogram_data.h 00:02:49.253 TEST_HEADER include/spdk/idxd_spec.h 00:02:49.253 TEST_HEADER include/spdk/idxd.h 00:02:49.253 TEST_HEADER include/spdk/init.h 00:02:49.253 TEST_HEADER include/spdk/ioat.h 00:02:49.253 TEST_HEADER include/spdk/iscsi_spec.h 00:02:49.253 TEST_HEADER include/spdk/ioat_spec.h 00:02:49.254 CC app/spdk_nvme_perf/perf.o 00:02:49.254 TEST_HEADER include/spdk/json.h 00:02:49.254 TEST_HEADER include/spdk/jsonrpc.h 00:02:49.254 TEST_HEADER include/spdk/keyring.h 00:02:49.254 TEST_HEADER include/spdk/keyring_module.h 00:02:49.254 TEST_HEADER include/spdk/likely.h 00:02:49.254 TEST_HEADER include/spdk/log.h 00:02:49.254 TEST_HEADER include/spdk/lvol.h 00:02:49.254 TEST_HEADER include/spdk/md5.h 00:02:49.254 TEST_HEADER include/spdk/memory.h 00:02:49.254 CC app/spdk_nvme_discover/discovery_aer.o 00:02:49.254 TEST_HEADER include/spdk/mmio.h 00:02:49.254 TEST_HEADER include/spdk/notify.h 00:02:49.254 TEST_HEADER include/spdk/nbd.h 00:02:49.254 TEST_HEADER include/spdk/net.h 00:02:49.254 TEST_HEADER include/spdk/nvme.h 00:02:49.254 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:49.254 TEST_HEADER include/spdk/nvme_intel.h 00:02:49.254 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:49.254 TEST_HEADER include/spdk/nvme_spec.h 00:02:49.254 TEST_HEADER include/spdk/nvme_zns.h 00:02:49.254 TEST_HEADER include/spdk/nvmf.h 00:02:49.254 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:49.254 TEST_HEADER include/spdk/nvmf_spec.h 00:02:49.254 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:49.254 TEST_HEADER include/spdk/nvmf_transport.h 00:02:49.254 TEST_HEADER include/spdk/opal_spec.h 00:02:49.254 TEST_HEADER include/spdk/opal.h 00:02:49.254 TEST_HEADER include/spdk/pci_ids.h 00:02:49.254 TEST_HEADER include/spdk/queue.h 00:02:49.254 TEST_HEADER include/spdk/pipe.h 00:02:49.254 CC app/spdk_dd/spdk_dd.o 00:02:49.254 TEST_HEADER include/spdk/rpc.h 00:02:49.254 TEST_HEADER include/spdk/reduce.h 00:02:49.254 TEST_HEADER include/spdk/scsi_spec.h 00:02:49.254 TEST_HEADER include/spdk/scheduler.h 00:02:49.254 TEST_HEADER include/spdk/scsi.h 00:02:49.254 TEST_HEADER include/spdk/sock.h 00:02:49.254 TEST_HEADER include/spdk/stdinc.h 00:02:49.254 TEST_HEADER include/spdk/trace_parser.h 00:02:49.254 TEST_HEADER include/spdk/tree.h 00:02:49.254 TEST_HEADER include/spdk/string.h 00:02:49.254 TEST_HEADER include/spdk/ublk.h 00:02:49.254 TEST_HEADER include/spdk/thread.h 00:02:49.254 TEST_HEADER include/spdk/trace.h 00:02:49.254 TEST_HEADER include/spdk/util.h 00:02:49.254 TEST_HEADER include/spdk/version.h 00:02:49.254 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:49.254 TEST_HEADER include/spdk/uuid.h 00:02:49.254 CC app/nvmf_tgt/nvmf_main.o 00:02:49.254 CC app/iscsi_tgt/iscsi_tgt.o 00:02:49.254 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:49.254 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:49.254 TEST_HEADER include/spdk/vmd.h 00:02:49.254 TEST_HEADER include/spdk/vhost.h 00:02:49.254 TEST_HEADER include/spdk/zipf.h 00:02:49.254 TEST_HEADER include/spdk/xor.h 00:02:49.254 CXX 
test/cpp_headers/accel.o 00:02:49.254 CXX test/cpp_headers/accel_module.o 00:02:49.254 CXX test/cpp_headers/assert.o 00:02:49.254 CXX test/cpp_headers/barrier.o 00:02:49.254 CXX test/cpp_headers/base64.o 00:02:49.254 CXX test/cpp_headers/bdev.o 00:02:49.254 CXX test/cpp_headers/bdev_zone.o 00:02:49.254 CXX test/cpp_headers/bit_array.o 00:02:49.254 CXX test/cpp_headers/blob_bdev.o 00:02:49.254 CXX test/cpp_headers/bdev_module.o 00:02:49.254 CXX test/cpp_headers/bit_pool.o 00:02:49.254 CXX test/cpp_headers/blobfs_bdev.o 00:02:49.254 CXX test/cpp_headers/blobfs.o 00:02:49.254 CXX test/cpp_headers/blob.o 00:02:49.254 CXX test/cpp_headers/conf.o 00:02:49.254 CXX test/cpp_headers/cpuset.o 00:02:49.254 CXX test/cpp_headers/config.o 00:02:49.254 CXX test/cpp_headers/crc16.o 00:02:49.254 CXX test/cpp_headers/crc32.o 00:02:49.254 CC app/spdk_tgt/spdk_tgt.o 00:02:49.254 CXX test/cpp_headers/dma.o 00:02:49.254 CXX test/cpp_headers/dif.o 00:02:49.254 CXX test/cpp_headers/crc64.o 00:02:49.254 CXX test/cpp_headers/endian.o 00:02:49.254 CXX test/cpp_headers/event.o 00:02:49.254 CXX test/cpp_headers/env_dpdk.o 00:02:49.254 CXX test/cpp_headers/fd_group.o 00:02:49.254 CXX test/cpp_headers/env.o 00:02:49.254 CXX test/cpp_headers/fd.o 00:02:49.254 CXX test/cpp_headers/file.o 00:02:49.254 CXX test/cpp_headers/fsdev.o 00:02:49.254 CXX test/cpp_headers/fsdev_module.o 00:02:49.254 CXX test/cpp_headers/ftl.o 00:02:49.254 CXX test/cpp_headers/hexlify.o 00:02:49.254 CXX test/cpp_headers/gpt_spec.o 00:02:49.254 CXX test/cpp_headers/histogram_data.o 00:02:49.254 CXX test/cpp_headers/idxd_spec.o 00:02:49.254 CXX test/cpp_headers/idxd.o 00:02:49.254 CXX test/cpp_headers/ioat_spec.o 00:02:49.254 CXX test/cpp_headers/init.o 00:02:49.254 CXX test/cpp_headers/ioat.o 00:02:49.254 CXX test/cpp_headers/json.o 00:02:49.254 CXX test/cpp_headers/iscsi_spec.o 00:02:49.254 CXX test/cpp_headers/jsonrpc.o 00:02:49.254 CXX test/cpp_headers/keyring_module.o 00:02:49.254 CXX test/cpp_headers/keyring.o 00:02:49.254 CXX test/cpp_headers/likely.o 00:02:49.254 CXX test/cpp_headers/log.o 00:02:49.254 CXX test/cpp_headers/lvol.o 00:02:49.254 CXX test/cpp_headers/md5.o 00:02:49.254 CXX test/cpp_headers/memory.o 00:02:49.254 CXX test/cpp_headers/mmio.o 00:02:49.254 CXX test/cpp_headers/nbd.o 00:02:49.254 CXX test/cpp_headers/net.o 00:02:49.254 CXX test/cpp_headers/nvme.o 00:02:49.254 CXX test/cpp_headers/notify.o 00:02:49.254 CXX test/cpp_headers/nvme_ocssd.o 00:02:49.254 CXX test/cpp_headers/nvme_intel.o 00:02:49.254 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:49.254 CXX test/cpp_headers/nvme_spec.o 00:02:49.254 CXX test/cpp_headers/nvme_zns.o 00:02:49.254 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:49.254 CXX test/cpp_headers/nvmf_cmd.o 00:02:49.254 CXX test/cpp_headers/nvmf.o 00:02:49.254 CXX test/cpp_headers/nvmf_spec.o 00:02:49.254 CXX test/cpp_headers/nvmf_transport.o 00:02:49.254 CXX test/cpp_headers/opal.o 00:02:49.254 CXX test/cpp_headers/opal_spec.o 00:02:49.254 CXX test/cpp_headers/pci_ids.o 00:02:49.254 CXX test/cpp_headers/pipe.o 00:02:49.254 CXX test/cpp_headers/queue.o 00:02:49.513 CXX test/cpp_headers/reduce.o 00:02:49.513 CXX test/cpp_headers/rpc.o 00:02:49.513 CXX test/cpp_headers/scheduler.o 00:02:49.513 CXX test/cpp_headers/scsi.o 00:02:49.513 CXX test/cpp_headers/scsi_spec.o 00:02:49.513 CXX test/cpp_headers/stdinc.o 00:02:49.513 CXX test/cpp_headers/sock.o 00:02:49.513 CXX test/cpp_headers/string.o 00:02:49.513 CXX test/cpp_headers/thread.o 00:02:49.513 CXX test/cpp_headers/trace.o 00:02:49.513 CXX 
test/cpp_headers/trace_parser.o 00:02:49.513 CXX test/cpp_headers/tree.o 00:02:49.513 CC examples/util/zipf/zipf.o 00:02:49.513 CC examples/ioat/verify/verify.o 00:02:49.513 CXX test/cpp_headers/ublk.o 00:02:49.513 CC test/env/vtophys/vtophys.o 00:02:49.513 CC examples/ioat/perf/perf.o 00:02:49.513 CC test/env/pci/pci_ut.o 00:02:49.513 CC test/app/histogram_perf/histogram_perf.o 00:02:49.513 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:49.513 CC test/thread/poller_perf/poller_perf.o 00:02:49.513 CC test/thread/lock/spdk_lock.o 00:02:49.513 CC test/app/stub/stub.o 00:02:49.514 LINK spdk_lspci 00:02:49.514 CC test/env/memory/memory_ut.o 00:02:49.514 CC app/fio/nvme/fio_plugin.o 00:02:49.514 CC test/app/jsoncat/jsoncat.o 00:02:49.514 CC test/dma/test_dma/test_dma.o 00:02:49.514 LINK rpc_client_test 00:02:49.514 CC app/fio/bdev/fio_plugin.o 00:02:49.514 CC test/app/bdev_svc/bdev_svc.o 00:02:49.514 LINK spdk_trace_record 00:02:49.514 LINK spdk_nvme_discover 00:02:49.514 CXX test/cpp_headers/util.o 00:02:49.514 LINK nvmf_tgt 00:02:49.514 CXX test/cpp_headers/uuid.o 00:02:49.514 CXX test/cpp_headers/version.o 00:02:49.514 CXX test/cpp_headers/vfio_user_pci.o 00:02:49.514 CXX test/cpp_headers/vhost.o 00:02:49.514 CXX test/cpp_headers/vfio_user_spec.o 00:02:49.514 CXX test/cpp_headers/vmd.o 00:02:49.514 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:49.514 CXX test/cpp_headers/xor.o 00:02:49.514 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:49.514 CXX test/cpp_headers/zipf.o 00:02:49.514 CC test/env/mem_callbacks/mem_callbacks.o 00:02:49.514 LINK interrupt_tgt 00:02:49.514 LINK iscsi_tgt 00:02:49.514 LINK vtophys 00:02:49.514 LINK zipf 00:02:49.773 LINK histogram_perf 00:02:49.773 LINK jsoncat 00:02:49.773 LINK poller_perf 00:02:49.773 LINK env_dpdk_post_init 00:02:49.773 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:49.773 LINK spdk_tgt 00:02:49.773 LINK stub 00:02:49.773 LINK ioat_perf 00:02:49.773 LINK verify 00:02:49.773 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:49.773 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:49.773 LINK spdk_trace 00:02:49.773 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:49.773 LINK bdev_svc 00:02:49.773 LINK spdk_dd 00:02:49.773 LINK nvme_fuzz 00:02:50.032 LINK pci_ut 00:02:50.032 LINK test_dma 00:02:50.032 LINK spdk_nvme_identify 00:02:50.032 LINK spdk_nvme 00:02:50.032 LINK llvm_vfio_fuzz 00:02:50.032 LINK vhost_fuzz 00:02:50.032 LINK mem_callbacks 00:02:50.032 LINK spdk_bdev 00:02:50.032 LINK spdk_top 00:02:50.032 LINK spdk_nvme_perf 00:02:50.032 LINK llvm_nvme_fuzz 00:02:50.290 CC app/vhost/vhost.o 00:02:50.290 CC examples/sock/hello_world/hello_sock.o 00:02:50.290 CC examples/idxd/perf/perf.o 00:02:50.290 CC examples/vmd/lsvmd/lsvmd.o 00:02:50.290 CC examples/vmd/led/led.o 00:02:50.290 CC examples/thread/thread/thread_ex.o 00:02:50.290 LINK led 00:02:50.290 LINK lsvmd 00:02:50.290 LINK vhost 00:02:50.549 LINK memory_ut 00:02:50.549 LINK hello_sock 00:02:50.549 LINK spdk_lock 00:02:50.549 LINK thread 00:02:50.549 LINK idxd_perf 00:02:50.549 LINK iscsi_fuzz 00:02:51.115 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:51.115 CC examples/nvme/hello_world/hello_world.o 00:02:51.115 CC examples/nvme/hotplug/hotplug.o 00:02:51.115 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:51.115 CC examples/nvme/abort/abort.o 00:02:51.115 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:51.115 CC examples/nvme/reconnect/reconnect.o 00:02:51.115 CC examples/nvme/arbitration/arbitration.o 00:02:51.115 CC test/event/reactor/reactor.o 
00:02:51.115 CC test/event/reactor_perf/reactor_perf.o 00:02:51.115 CC test/event/event_perf/event_perf.o 00:02:51.115 CC test/event/app_repeat/app_repeat.o 00:02:51.115 CC test/event/scheduler/scheduler.o 00:02:51.374 LINK reactor_perf 00:02:51.374 LINK reactor 00:02:51.374 LINK event_perf 00:02:51.374 LINK app_repeat 00:02:51.374 LINK pmr_persistence 00:02:51.374 LINK hello_world 00:02:51.374 LINK cmb_copy 00:02:51.374 LINK hotplug 00:02:51.374 LINK reconnect 00:02:51.374 LINK abort 00:02:51.374 LINK scheduler 00:02:51.374 LINK arbitration 00:02:51.374 LINK nvme_manage 00:02:51.632 CC test/nvme/reset/reset.o 00:02:51.632 CC test/nvme/compliance/nvme_compliance.o 00:02:51.632 CC test/nvme/boot_partition/boot_partition.o 00:02:51.632 CC test/nvme/simple_copy/simple_copy.o 00:02:51.632 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:51.632 CC test/nvme/aer/aer.o 00:02:51.632 CC test/nvme/e2edp/nvme_dp.o 00:02:51.632 CC test/nvme/overhead/overhead.o 00:02:51.632 CC test/nvme/connect_stress/connect_stress.o 00:02:51.632 CC test/nvme/cuse/cuse.o 00:02:51.632 CC test/nvme/err_injection/err_injection.o 00:02:51.632 CC test/nvme/reserve/reserve.o 00:02:51.632 CC test/nvme/fdp/fdp.o 00:02:51.632 CC test/nvme/sgl/sgl.o 00:02:51.632 CC test/nvme/startup/startup.o 00:02:51.632 CC test/accel/dif/dif.o 00:02:51.632 CC test/nvme/fused_ordering/fused_ordering.o 00:02:51.632 CC test/blobfs/mkfs/mkfs.o 00:02:51.889 LINK doorbell_aers 00:02:51.889 LINK boot_partition 00:02:51.889 LINK connect_stress 00:02:51.889 CC test/lvol/esnap/esnap.o 00:02:51.889 LINK err_injection 00:02:51.889 LINK simple_copy 00:02:51.889 LINK startup 00:02:51.889 LINK fused_ordering 00:02:51.889 LINK reserve 00:02:51.889 LINK reset 00:02:51.889 LINK aer 00:02:51.889 LINK nvme_dp 00:02:51.889 LINK overhead 00:02:51.889 LINK fdp 00:02:51.889 LINK sgl 00:02:51.889 LINK mkfs 00:02:51.889 LINK nvme_compliance 00:02:52.148 LINK dif 00:02:52.148 CC examples/accel/perf/accel_perf.o 00:02:52.148 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:52.148 CC examples/blob/cli/blobcli.o 00:02:52.148 CC examples/blob/hello_world/hello_blob.o 00:02:52.407 LINK hello_blob 00:02:52.407 LINK hello_fsdev 00:02:52.407 LINK accel_perf 00:02:52.407 LINK blobcli 00:02:52.407 LINK cuse 00:02:53.343 CC examples/bdev/hello_world/hello_bdev.o 00:02:53.343 CC examples/bdev/bdevperf/bdevperf.o 00:02:53.343 LINK hello_bdev 00:02:53.601 CC test/bdev/bdevio/bdevio.o 00:02:53.860 LINK bdevperf 00:02:54.120 LINK bdevio 00:02:55.059 LINK esnap 00:02:55.318 CC examples/nvmf/nvmf/nvmf.o 00:02:55.578 LINK nvmf 00:02:56.959 00:02:56.959 real 0m48.115s 00:02:56.959 user 6m20.720s 00:02:56.959 sys 2m43.995s 00:02:56.959 10:05:10 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:56.959 10:05:10 make -- common/autotest_common.sh@10 -- $ set +x 00:02:56.959 ************************************ 00:02:56.959 END TEST make 00:02:56.959 ************************************ 00:02:56.959 10:05:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:56.959 10:05:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:56.959 10:05:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:56.959 10:05:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.959 10:05:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:56.959 10:05:10 -- pm/common@44 -- $ pid=340346 00:02:56.959 10:05:10 -- pm/common@50 -- $ kill -TERM 340346 00:02:56.959 10:05:10 -- pm/common@42 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.959 10:05:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:56.959 10:05:10 -- pm/common@44 -- $ pid=340348 00:02:56.959 10:05:10 -- pm/common@50 -- $ kill -TERM 340348 00:02:56.959 10:05:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.959 10:05:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:56.959 10:05:10 -- pm/common@44 -- $ pid=340350 00:02:56.959 10:05:10 -- pm/common@50 -- $ kill -TERM 340350 00:02:56.959 10:05:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.959 10:05:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:56.959 10:05:10 -- pm/common@44 -- $ pid=340376 00:02:56.959 10:05:10 -- pm/common@50 -- $ sudo -E kill -TERM 340376 00:02:56.959 10:05:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:56.959 10:05:10 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:56.959 10:05:10 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:56.959 10:05:10 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:56.959 10:05:10 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:56.959 10:05:10 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:56.959 10:05:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:56.959 10:05:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:56.959 10:05:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:56.959 10:05:10 -- scripts/common.sh@336 -- # IFS=.-: 00:02:56.959 10:05:10 -- scripts/common.sh@336 -- # read -ra ver1 00:02:56.959 10:05:10 -- scripts/common.sh@337 -- # IFS=.-: 00:02:56.959 10:05:10 -- scripts/common.sh@337 -- # read -ra ver2 00:02:56.959 10:05:10 -- scripts/common.sh@338 -- # local 'op=<' 00:02:56.959 10:05:10 -- scripts/common.sh@340 -- # ver1_l=2 00:02:56.959 10:05:10 -- scripts/common.sh@341 -- # ver2_l=1 00:02:56.959 10:05:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:56.959 10:05:10 -- scripts/common.sh@344 -- # case "$op" in 00:02:56.959 10:05:10 -- scripts/common.sh@345 -- # : 1 00:02:56.959 10:05:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:56.959 10:05:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:56.959 10:05:10 -- scripts/common.sh@365 -- # decimal 1 00:02:56.959 10:05:10 -- scripts/common.sh@353 -- # local d=1 00:02:56.959 10:05:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:56.959 10:05:10 -- scripts/common.sh@355 -- # echo 1 00:02:56.959 10:05:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:56.959 10:05:10 -- scripts/common.sh@366 -- # decimal 2 00:02:56.959 10:05:10 -- scripts/common.sh@353 -- # local d=2 00:02:56.959 10:05:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:56.959 10:05:10 -- scripts/common.sh@355 -- # echo 2 00:02:56.959 10:05:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:56.959 10:05:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:56.959 10:05:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:56.959 10:05:10 -- scripts/common.sh@368 -- # return 0 00:02:56.959 10:05:10 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:56.959 10:05:10 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:56.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.959 --rc genhtml_branch_coverage=1 00:02:56.959 --rc genhtml_function_coverage=1 00:02:56.959 --rc genhtml_legend=1 00:02:56.959 --rc geninfo_all_blocks=1 00:02:56.959 --rc geninfo_unexecuted_blocks=1 00:02:56.959 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:56.959 ' 00:02:56.959 10:05:10 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:56.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.959 --rc genhtml_branch_coverage=1 00:02:56.959 --rc genhtml_function_coverage=1 00:02:56.959 --rc genhtml_legend=1 00:02:56.959 --rc geninfo_all_blocks=1 00:02:56.959 --rc geninfo_unexecuted_blocks=1 00:02:56.959 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:56.959 ' 00:02:56.959 10:05:10 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:56.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.959 --rc genhtml_branch_coverage=1 00:02:56.959 --rc genhtml_function_coverage=1 00:02:56.959 --rc genhtml_legend=1 00:02:56.959 --rc geninfo_all_blocks=1 00:02:56.959 --rc geninfo_unexecuted_blocks=1 00:02:56.959 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:56.959 ' 00:02:56.959 10:05:10 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:56.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.960 --rc genhtml_branch_coverage=1 00:02:56.960 --rc genhtml_function_coverage=1 00:02:56.960 --rc genhtml_legend=1 00:02:56.960 --rc geninfo_all_blocks=1 00:02:56.960 --rc geninfo_unexecuted_blocks=1 00:02:56.960 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:56.960 ' 00:02:56.960 10:05:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:56.960 10:05:10 -- nvmf/common.sh@7 -- # uname -s 00:02:56.960 10:05:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:56.960 10:05:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:56.960 10:05:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:56.960 10:05:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:56.960 10:05:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:56.960 10:05:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:56.960 10:05:10 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:56.960 10:05:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:56.960 10:05:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:56.960 10:05:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:56.960 10:05:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:56.960 10:05:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:56.960 10:05:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:56.960 10:05:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:56.960 10:05:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:56.960 10:05:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:56.960 10:05:10 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:56.960 10:05:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:56.960 10:05:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:56.960 10:05:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:56.960 10:05:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:56.960 10:05:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.960 10:05:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.960 10:05:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.960 10:05:10 -- paths/export.sh@5 -- # export PATH 00:02:56.960 10:05:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.960 10:05:10 -- nvmf/common.sh@51 -- # : 0 00:02:56.960 10:05:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:56.960 10:05:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:56.960 10:05:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:56.960 10:05:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:56.960 10:05:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:56.960 10:05:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:56.960 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:56.960 10:05:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:56.960 10:05:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:56.960 10:05:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:56.960 10:05:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:56.960 10:05:10 -- spdk/autotest.sh@32 -- # uname -s 00:02:56.960 
10:05:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:56.960 10:05:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:56.960 10:05:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:57.220 10:05:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:57.220 10:05:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:57.220 10:05:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:57.220 10:05:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:57.220 10:05:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:57.220 10:05:10 -- spdk/autotest.sh@48 -- # udevadm_pid=405176 00:02:57.220 10:05:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:57.220 10:05:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:57.220 10:05:10 -- pm/common@17 -- # local monitor 00:02:57.220 10:05:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.220 10:05:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.220 10:05:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.220 10:05:10 -- pm/common@21 -- # date +%s 00:02:57.220 10:05:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.220 10:05:10 -- pm/common@21 -- # date +%s 00:02:57.220 10:05:10 -- pm/common@25 -- # sleep 1 00:02:57.220 10:05:10 -- pm/common@21 -- # date +%s 00:02:57.220 10:05:10 -- pm/common@21 -- # date +%s 00:02:57.220 10:05:10 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733994310 00:02:57.220 10:05:10 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733994310 00:02:57.220 10:05:10 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733994310 00:02:57.220 10:05:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733994310 00:02:57.220 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733994310_collect-vmstat.pm.log 00:02:57.221 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733994310_collect-cpu-load.pm.log 00:02:57.221 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733994310_collect-cpu-temp.pm.log 00:02:57.221 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733994310_collect-bmc-pm.bmc.pm.log 00:02:58.159 10:05:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:58.159 10:05:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:58.159 10:05:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:58.159 10:05:11 -- common/autotest_common.sh@10 -- # set +x 
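The autotest.sh entries traced above set up crash collection and resource monitoring before any tests run: the existing '|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' pattern is saved, a coredumps directory is created under the output tree, the core handler is repointed at SPDK's core-collector.sh, and the collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm monitors are launched with their output redirected to the .pm.log files listed. The xtrace does not show redirection targets, so the sketch below assumes the echo lines write to /proc/sys/kernel/core_pattern; it is a reconstruction for orientation, not the script's verbatim code, and the variable names are illustrative.

    #!/usr/bin/env bash
    # Reroute kernel core dumps through SPDK's collector for the duration of a run.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk      # layout taken from the trace
    output_dir=$rootdir/../output

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)             # e.g. '|/usr/lib/systemd/systemd-coredump ...'
    mkdir -p "$output_dir/coredumps"
    # Pipe every crash through core-collector.sh with the PID, signal and timestamp.
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" | sudo tee /proc/sys/kernel/core_pattern > /dev/null

    # ... tests run here ...

    # Restore the original handler on the way out.
    echo "$old_core_pattern" | sudo tee /proc/sys/kernel/core_pattern > /dev/null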
00:02:58.159 10:05:11 -- spdk/autotest.sh@59 -- # create_test_list 00:02:58.159 10:05:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:58.159 10:05:11 -- common/autotest_common.sh@10 -- # set +x 00:02:58.159 10:05:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:58.159 10:05:11 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:58.159 10:05:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:58.159 10:05:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:58.159 10:05:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:58.159 10:05:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:58.159 10:05:11 -- common/autotest_common.sh@1457 -- # uname 00:02:58.159 10:05:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:58.159 10:05:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:58.159 10:05:11 -- common/autotest_common.sh@1477 -- # uname 00:02:58.159 10:05:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:58.159 10:05:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:58.159 10:05:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:58.159 lcov: LCOV version 1.15 00:02:58.159 10:05:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:03:06.414 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:10.697 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:03:13.987 10:05:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:13.987 10:05:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:13.987 10:05:27 -- common/autotest_common.sh@10 -- # set +x 00:03:13.987 10:05:27 -- spdk/autotest.sh@78 -- # rm -f 00:03:13.987 10:05:27 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.279 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:17.279 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:17.279 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:17.279 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:17.279 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:17.538 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:17.538 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:17.538 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:17.538 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:17.538 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:17.538 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:17.538 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:17.538 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:17.538 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:17.798 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:17.798 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:17.798 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:17.798 10:05:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:17.798 10:05:31 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:17.798 10:05:31 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:17.798 10:05:31 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:17.798 10:05:31 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:17.798 10:05:31 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:17.798 10:05:31 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:17.798 10:05:31 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:17.798 10:05:31 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:17.798 10:05:31 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:17.798 10:05:31 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:17.798 10:05:31 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.798 10:05:31 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:17.798 10:05:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:17.798 10:05:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:17.798 10:05:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:17.798 10:05:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:17.798 10:05:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:17.798 10:05:31 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:17.798 No valid GPT data, bailing 00:03:17.798 10:05:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:17.798 10:05:31 -- scripts/common.sh@394 -- # pt= 00:03:17.798 10:05:31 -- scripts/common.sh@395 -- # return 1 00:03:17.798 10:05:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:17.798 1+0 records in 00:03:17.798 1+0 records out 00:03:17.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467131 s, 224 MB/s 00:03:17.798 10:05:31 -- spdk/autotest.sh@105 -- # sync 00:03:17.798 10:05:31 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:17.798 10:05:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:17.798 10:05:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:25.927 10:05:38 -- spdk/autotest.sh@111 -- # uname -s 00:03:25.927 10:05:38 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:25.927 10:05:38 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:03:25.927 10:05:38 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:25.927 10:05:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.927 10:05:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.927 10:05:38 -- common/autotest_common.sh@10 -- # set +x 00:03:25.927 ************************************ 00:03:25.927 
START TEST setup.sh 00:03:25.927 ************************************ 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:25.927 * Looking for test storage... 00:03:25.927 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1711 -- # lcov --version 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@345 -- # : 1 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@353 -- # local d=1 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@355 -- # echo 1 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@353 -- # local d=2 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@355 -- # echo 2 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.927 10:05:38 setup.sh -- scripts/common.sh@368 -- # return 0 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.927 --rc genhtml_branch_coverage=1 00:03:25.927 --rc genhtml_function_coverage=1 00:03:25.927 --rc genhtml_legend=1 00:03:25.927 --rc geninfo_all_blocks=1 00:03:25.927 --rc geninfo_unexecuted_blocks=1 00:03:25.927 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:25.927 ' 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.927 --rc genhtml_branch_coverage=1 00:03:25.927 --rc genhtml_function_coverage=1 00:03:25.927 --rc genhtml_legend=1 00:03:25.927 --rc geninfo_all_blocks=1 00:03:25.927 --rc geninfo_unexecuted_blocks=1 00:03:25.927 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:25.927 ' 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.927 --rc genhtml_branch_coverage=1 00:03:25.927 --rc genhtml_function_coverage=1 00:03:25.927 --rc genhtml_legend=1 00:03:25.927 --rc geninfo_all_blocks=1 00:03:25.927 --rc geninfo_unexecuted_blocks=1 00:03:25.927 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:25.927 ' 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.927 --rc genhtml_branch_coverage=1 00:03:25.927 --rc genhtml_function_coverage=1 00:03:25.927 --rc genhtml_legend=1 00:03:25.927 --rc geninfo_all_blocks=1 00:03:25.927 --rc geninfo_unexecuted_blocks=1 00:03:25.927 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:25.927 ' 00:03:25.927 10:05:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:25.927 10:05:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:25.927 10:05:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.927 10:05:38 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.927 
10:05:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:25.927 ************************************ 00:03:25.927 START TEST acl 00:03:25.927 ************************************ 00:03:25.927 10:05:38 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:25.927 * Looking for test storage... 00:03:25.927 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:25.927 10:05:39 setup.sh.acl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:25.927 10:05:39 setup.sh.acl -- common/autotest_common.sh@1711 -- # lcov --version 00:03:25.927 10:05:39 setup.sh.acl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:25.927 10:05:39 setup.sh.acl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.927 10:05:39 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:03:25.927 10:05:39 setup.sh.acl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.927 10:05:39 setup.sh.acl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.927 --rc genhtml_branch_coverage=1 00:03:25.927 --rc genhtml_function_coverage=1 00:03:25.927 --rc genhtml_legend=1 00:03:25.927 --rc geninfo_all_blocks=1 00:03:25.927 --rc geninfo_unexecuted_blocks=1 00:03:25.927 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:25.927 ' 00:03:25.927 10:05:39 setup.sh.acl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.927 --rc genhtml_branch_coverage=1 00:03:25.927 --rc genhtml_function_coverage=1 00:03:25.927 --rc genhtml_legend=1 00:03:25.927 --rc geninfo_all_blocks=1 00:03:25.927 --rc geninfo_unexecuted_blocks=1 00:03:25.927 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:25.927 ' 00:03:25.927 10:05:39 setup.sh.acl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.927 --rc genhtml_branch_coverage=1 00:03:25.927 --rc genhtml_function_coverage=1 00:03:25.927 --rc genhtml_legend=1 00:03:25.927 --rc geninfo_all_blocks=1 00:03:25.927 --rc geninfo_unexecuted_blocks=1 00:03:25.927 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:25.927 ' 00:03:25.927 10:05:39 setup.sh.acl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:25.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.927 --rc genhtml_branch_coverage=1 00:03:25.928 --rc genhtml_function_coverage=1 00:03:25.928 --rc genhtml_legend=1 00:03:25.928 --rc geninfo_all_blocks=1 00:03:25.928 --rc geninfo_unexecuted_blocks=1 00:03:25.928 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:25.928 ' 00:03:25.928 10:05:39 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:25.928 10:05:39 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.928 10:05:39 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:25.928 10:05:39 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:25.928 10:05:39 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:25.928 10:05:39 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:25.928 10:05:39 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:25.928 10:05:39 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:25.928 10:05:39 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.928 10:05:39 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.124 10:05:43 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:30.124 10:05:43 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:30.124 10:05:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.124 10:05:43 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:30.124 10:05:43 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.124 10:05:43 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:33.462 Hugepages 00:03:33.462 node hugesize free / total 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.462 00:03:33.462 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.462 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:33.463 10:05:46 
setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:33.463 10:05:46 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:33.463 10:05:46 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:33.463 10:05:46 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:33.463 10:05:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:33.463 ************************************ 00:03:33.463 START TEST denied 00:03:33.463 ************************************ 00:03:33.463 10:05:46 setup.sh.acl.denied -- common/autotest_common.sh@1129 -- # denied 00:03:33.463 10:05:46 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:33.463 10:05:46 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:33.463 10:05:46 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:33.463 10:05:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.463 10:05:46 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:36.752 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:36.752 10:05:50 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:36.752 10:05:50 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:36.752 10:05:50 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:36.752 10:05:50 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:36.752 10:05:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:36.752 10:05:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:36.752 10:05:50 
setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:36.752 10:05:50 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:36.752 10:05:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.752 10:05:50 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.031 00:03:42.031 real 0m8.443s 00:03:42.031 user 0m2.801s 00:03:42.031 sys 0m4.958s 00:03:42.031 10:05:55 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.031 10:05:55 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:42.031 ************************************ 00:03:42.031 END TEST denied 00:03:42.031 ************************************ 00:03:42.031 10:05:55 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:42.031 10:05:55 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.031 10:05:55 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.031 10:05:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.031 ************************************ 00:03:42.031 START TEST allowed 00:03:42.031 ************************************ 00:03:42.031 10:05:55 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:03:42.031 10:05:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:42.031 10:05:55 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:42.031 10:05:55 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:42.031 10:05:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.031 10:05:55 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:47.309 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.309 10:06:00 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:47.309 10:06:00 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:47.309 10:06:00 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:47.309 10:06:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.309 10:06:00 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.505 00:03:51.505 real 0m9.066s 00:03:51.505 user 0m2.582s 00:03:51.505 sys 0m5.075s 00:03:51.505 10:06:04 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.505 10:06:04 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:51.505 ************************************ 00:03:51.505 END TEST allowed 00:03:51.505 ************************************ 00:03:51.505 00:03:51.505 real 0m25.311s 00:03:51.505 user 0m8.232s 00:03:51.505 sys 0m15.288s 00:03:51.505 10:06:04 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.505 10:06:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.505 ************************************ 00:03:51.505 END TEST acl 00:03:51.505 ************************************ 00:03:51.505 10:06:04 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:51.505 10:06:04 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.505 10:06:04 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.505 10:06:04 setup.sh 
-- common/autotest_common.sh@10 -- # set +x 00:03:51.505 ************************************ 00:03:51.505 START TEST hugepages 00:03:51.505 ************************************ 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:51.505 * Looking for test storage... 00:03:51.505 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.505 10:06:04 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:51.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.505 --rc genhtml_branch_coverage=1 00:03:51.505 --rc genhtml_function_coverage=1 00:03:51.505 --rc genhtml_legend=1 00:03:51.505 --rc geninfo_all_blocks=1 00:03:51.505 --rc geninfo_unexecuted_blocks=1 00:03:51.505 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:51.505 ' 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:51.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.505 --rc genhtml_branch_coverage=1 00:03:51.505 --rc genhtml_function_coverage=1 00:03:51.505 --rc genhtml_legend=1 00:03:51.505 --rc geninfo_all_blocks=1 00:03:51.505 --rc geninfo_unexecuted_blocks=1 00:03:51.505 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:51.505 ' 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:51.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.505 --rc genhtml_branch_coverage=1 00:03:51.505 --rc genhtml_function_coverage=1 00:03:51.505 --rc genhtml_legend=1 00:03:51.505 --rc geninfo_all_blocks=1 00:03:51.505 --rc geninfo_unexecuted_blocks=1 00:03:51.505 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:51.505 ' 00:03:51.505 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:51.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.505 --rc genhtml_branch_coverage=1 00:03:51.505 --rc genhtml_function_coverage=1 00:03:51.505 --rc genhtml_legend=1 00:03:51.505 --rc geninfo_all_blocks=1 00:03:51.505 --rc geninfo_unexecuted_blocks=1 00:03:51.505 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:51.505 ' 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:51.505 10:06:04 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 40312872 kB' 'MemAvailable: 44031228 kB' 'Buffers: 9316 kB' 'Cached: 11543208 kB' 'SwapCached: 0 kB' 'Active: 8453380 kB' 'Inactive: 3688888 kB' 'Active(anon): 8036788 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593148 kB' 'Mapped: 150720 kB' 'Shmem: 7447044 kB' 'KReclaimable: 219484 kB' 'Slab: 886288 kB' 'SReclaimable: 219484 kB' 'SUnreclaim: 666804 kB' 'KernelStack: 21888 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433356 kB' 'Committed_AS: 9319044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214224 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.505 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.506 
10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.506 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 
10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 
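The loop traced above is setup/common.sh scanning /proc/meminfo with IFS=': ' until it reaches the Hugepagesize field, then echoing its value (2048 kB) so that hugepages.sh can record default_hugepages=2048. A minimal stand-alone sketch of the same idea, using a helper name of our own (get_meminfo_field is illustrative, not the SPDK function):

    get_meminfo_field() {
        # Split each /proc/meminfo line on ': ' and print the value of the
        # requested field, the same way the traced read loop does.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    default_hugepages=$(get_meminfo_field Hugepagesize)   # typically 2048 (kB) on x86_64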
00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:51.507 10:06:04 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:03:51.507 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.507 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.507 10:06:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.507 ************************************ 00:03:51.507 START TEST single_node_setup 00:03:51.507 ************************************ 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup 
-- setup/hugepages.sh@48 -- # local size=2097152 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:51.507 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.508 10:06:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:54.800 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:54.800 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:56.181 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:56.181 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # 
verify_nr_hugepages 00:03:56.181 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:03:56.181 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:03:56.181 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:03:56.181 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:03:56.181 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:03:56.181 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42551528 kB' 'MemAvailable: 46269244 kB' 'Buffers: 9316 kB' 'Cached: 11543352 kB' 'SwapCached: 0 kB' 'Active: 8450516 kB' 'Inactive: 3688888 kB' 'Active(anon): 8033924 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589656 kB' 'Mapped: 150116 kB' 'Shmem: 7447188 kB' 'KReclaimable: 218204 kB' 'Slab: 884004 kB' 'SReclaimable: 218204 kB' 'SUnreclaim: 665800 kB' 'KernelStack: 21920 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9312128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214256 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
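Before the verify_nr_hugepages call above, the trace shows clear_hp writing 0 into every node's hugepages-2048kB pool and then re-running scripts/setup.sh with NRHUGE=1024 HUGENODE=0, which also rebinds the ioatdma and nvme devices to vfio-pci. A rough equivalent of the hugepage part, assuming the standard kernel sysfs layout (a sketch of the idea, not the SPDK scripts themselves):

    clear_and_reserve_single_node() {
        local nrhuge=${1:-1024} node=${2:-0} hp
        # clear_hp step: drop any existing 2 MB reservations on every NUMA node
        for hp in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
            echo 0 > "$hp/nr_hugepages"
        done
        # single-node step: place the whole requested pool on one node (HUGENODE)
        echo "$nrhuge" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    }
    clear_and_reserve_single_node 1024 0   # matches the NRHUGE=1024 HUGENODE=0 run above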
00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.447 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
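This scan continues below until the AnonHugePages field is reached (anon=0), and the same pass is then repeated for HugePages_Surp and HugePages_Rsvd. Those values are what verify_nr_hugepages weighs against the requested pool size. A simplified illustration of that comparison, using awk in place of the traced read loop and a hypothetical meminfo helper; the real hugepages.sh also does per-node accounting:

    meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    anon=$(meminfo AnonHugePages)     # kB of transparent huge pages in use
    surp=$(meminfo HugePages_Surp)    # surplus pages beyond the static pool
    resv=$(meminfo HugePages_Rsvd)    # pages reserved but not yet faulted in
    total=$(meminfo HugePages_Total)

    expected=1024                     # the NRHUGE value requested for this test
    if (( total == expected && surp == 0 && resv == 0 )); then
        echo "hugepage pool matches the requested $expected pages"
    fi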
00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.448 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:56.449 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42555748 kB' 'MemAvailable: 46273464 kB' 'Buffers: 9316 kB' 'Cached: 11543356 kB' 'SwapCached: 0 kB' 'Active: 8448756 kB' 'Inactive: 3688888 kB' 'Active(anon): 8032164 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588272 kB' 'Mapped: 149996 kB' 'Shmem: 7447192 kB' 'KReclaimable: 218204 kB' 'Slab: 883960 kB' 'SReclaimable: 218204 kB' 'SUnreclaim: 665756 kB' 'KernelStack: 21792 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9312148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214224 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.449 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.450 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42556408 kB' 'MemAvailable: 46274124 kB' 'Buffers: 9316 kB' 'Cached: 11543356 kB' 'SwapCached: 0 kB' 'Active: 8448960 kB' 'Inactive: 3688888 kB' 'Active(anon): 8032368 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588500 kB' 'Mapped: 149996 kB' 'Shmem: 7447192 kB' 'KReclaimable: 218204 kB' 'Slab: 883960 kB' 'SReclaimable: 218204 kB' 'SUnreclaim: 665756 kB' 'KernelStack: 21968 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9312320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214240 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.451 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.452 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:56.453 nr_hugepages=1024 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:56.453 resv_hugepages=0 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:56.453 surplus_hugepages=0 00:03:56.453 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:56.453 anon_hugepages=0 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42556484 kB' 'MemAvailable: 46274200 kB' 'Buffers: 9316 kB' 'Cached: 11543356 kB' 'SwapCached: 0 kB' 'Active: 8448996 kB' 'Inactive: 3688888 kB' 'Active(anon): 8032404 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588528 kB' 'Mapped: 149972 kB' 'Shmem: 7447192 kB' 'KReclaimable: 218204 kB' 'Slab: 883952 kB' 'SReclaimable: 218204 kB' 'SUnreclaim: 665748 kB' 'KernelStack: 21824 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9312196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214240 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:03:56.453 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.454 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.455 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26023520 kB' 'MemUsed: 6561848 kB' 'SwapCached: 0 kB' 'Active: 3349148 kB' 'Inactive: 102080 kB' 'Active(anon): 3057276 kB' 'Inactive(anon): 0 kB' 'Active(file): 291872 kB' 'Inactive(file): 102080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3012672 kB' 'Mapped: 125764 kB' 'AnonPages: 441744 kB' 'Shmem: 2618720 kB' 'KernelStack: 12888 kB' 'PageTables: 5604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128180 kB' 'Slab: 425752 kB' 'SReclaimable: 128180 kB' 'SUnreclaim: 297572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.456 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue (the same [[ field == HugePages_Surp ]] / continue check repeats for every remaining meminfo field, SwapCached through HugePages_Total; none of them match) 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read
-r var val _ 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:56.457 node0=1024 expecting 1024 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:56.457 00:03:56.457 real 0m5.277s 00:03:56.457 user 0m1.437s 00:03:56.457 sys 0m2.442s 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.457 10:06:09 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:03:56.457 ************************************ 00:03:56.457 END TEST single_node_setup 00:03:56.457 ************************************ 00:03:56.457 10:06:10 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:03:56.457 10:06:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.457 10:06:10 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.457 10:06:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.457 ************************************ 00:03:56.457 START TEST even_2G_alloc 00:03:56.457 ************************************ 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local 
_nr_hugepages=1024 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:56.457 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:56.458 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:03:56.458 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:03:56.458 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.458 10:06:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:00.663 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:00.663 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:00.663 10:06:13 
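The even_2G_alloc trace above turns the requested 2097152 kB into 1024 default-size (2048 kB) hugepages and then hands 512 of them to each of the two NUMA nodes. A minimal stand-alone sketch of that even split, reusing the variable names visible in the trace (an illustration under those assumptions, not the setup/hugepages.sh implementation itself):

#!/usr/bin/env bash
# Even per-node hugepage split, mirroring the get_test_nr_hugepages_per_node trace above.
size=2097152                                  # requested hugepage memory in kB
default_hugepages=2048                        # hugepage size in kB (Hugepagesize: 2048 kB)
nr_hugepages=$(( size / default_hugepages ))  # 1024 pages in total
_no_nodes=2                                   # NUMA nodes on the test box

declare -a nodes_test
per_node=$(( nr_hugepages / _no_nodes ))      # 512 pages per node
node=$_no_nodes
while (( node > 0 )); do
  nodes_test[node - 1]=$per_node              # node1 first, then node0, as in the trace
  (( node-- ))
done

for n in "${!nodes_test[@]}"; do
  echo "node${n}=${nodes_test[n]}"            # prints node0=512 and node1=512
done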
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.663 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42571472 kB' 'MemAvailable: 46289152 kB' 'Buffers: 9316 kB' 'Cached: 11543520 kB' 'SwapCached: 0 kB' 'Active: 8448168 kB' 'Inactive: 3688888 kB' 'Active(anon): 8031576 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587504 kB' 'Mapped: 149036 kB' 'Shmem: 7447356 kB' 'KReclaimable: 218132 kB' 'Slab: 883988 kB' 'SReclaimable: 218132 kB' 'SUnreclaim: 665856 kB' 'KernelStack: 21696 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9303308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214288 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:00.664 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.664 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.664 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.664 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.664 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.664 10:06:13 
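Each get_meminfo call in the trace reads the relevant meminfo file, then walks it with IFS=': ', comparing every field name against the requested key and echoing the value once it matches (0 for AnonHugePages and HugePages_Surp here). A minimal stand-alone sketch of that lookup pattern against /proc/meminfo (a simplified illustration, not the setup/common.sh implementation):

#!/usr/bin/env bash
# Look up a single field from /proc/meminfo the way the traced loop does.
get_meminfo() {
  local get=$1 var val _
  # IFS=': ' splits "Field:   value kB" into field name, value and unit
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"        # value only, in kB where the field carries a unit
      return 0
    fi
  done < /proc/meminfo
  return 1               # requested field not present
}

get_meminfo AnonHugePages    # e.g. 0
get_meminfo HugePages_Surp   # 0 when no surplus hugepages exist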
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue (the same [[ field == AnonHugePages ]] / continue check repeats for every following /proc/meminfo field, MemAvailable through VmallocTotal; none of them match) 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42573812 kB' 'MemAvailable: 46291492 kB' 'Buffers: 9316 kB' 'Cached: 11543524 kB' 'SwapCached: 0 kB' 'Active: 8447920 kB' 
'Inactive: 3688888 kB' 'Active(anon): 8031328 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587296 kB' 'Mapped: 149016 kB' 'Shmem: 7447360 kB' 'KReclaimable: 218132 kB' 'Slab: 884108 kB' 'SReclaimable: 218132 kB' 'SUnreclaim: 665976 kB' 'KernelStack: 21712 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9303328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214240 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.665 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue (the same [[ field == HugePages_Surp ]] / continue check repeats for every following /proc/meminfo field, Inactive through HugePages_Total; none of them match) 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42574612 kB' 'MemAvailable: 46292292 kB' 'Buffers: 9316 kB' 'Cached: 11543540 kB' 'SwapCached: 0 kB' 'Active: 8447928 kB' 'Inactive: 3688888 kB' 'Active(anon): 8031336 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587300 kB' 'Mapped: 149016 kB' 'Shmem: 7447376 kB' 'KReclaimable: 218132 kB' 'Slab: 884108 kB' 'SReclaimable: 218132 kB' 'SUnreclaim: 665976 kB' 'KernelStack: 21712 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9303348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214240 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.667 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.668 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:00.669 nr_hugepages=1024 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:00.669 resv_hugepages=0 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:00.669 surplus_hugepages=0 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:00.669 anon_hugepages=0 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.669 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42574364 kB' 'MemAvailable: 46292044 kB' 'Buffers: 9316 kB' 'Cached: 11543564 kB' 'SwapCached: 0 kB' 'Active: 8447940 kB' 'Inactive: 3688888 kB' 'Active(anon): 8031348 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587236 kB' 'Mapped: 149016 kB' 'Shmem: 7447400 kB' 'KReclaimable: 218132 kB' 'Slab: 884108 kB' 'SReclaimable: 218132 kB' 'SUnreclaim: 665976 kB' 'KernelStack: 21696 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9303372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214240 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.670 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:00.671 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:00.672 10:06:13 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27086428 kB' 'MemUsed: 5498940 kB' 'SwapCached: 0 kB' 'Active: 3346504 kB' 'Inactive: 102080 kB' 'Active(anon): 3054632 kB' 'Inactive(anon): 0 kB' 'Active(file): 291872 kB' 'Inactive(file): 102080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3012764 kB' 'Mapped: 125316 kB' 'AnonPages: 438996 kB' 'Shmem: 2618812 kB' 'KernelStack: 12664 kB' 'PageTables: 5356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128172 kB' 'Slab: 425916 kB' 'SReclaimable: 128172 kB' 'SUnreclaim: 297744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 
10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.672 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 
10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
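The trace above is setup/common.sh scanning node0's meminfo one key at a time: every line is split on ': ', keys other than HugePages_Surp fall through to continue, and the matching key's value (0 here) is echoed back to hugepages.sh. A minimal sketch of that scan pattern, with an illustrative function name that is not part of the script:

# Illustrative sketch only; the real logic lives in setup/common.sh (get_meminfo).
scan_meminfo_key() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every non-matching key
        echo "${val:-0}"
        return 0
    done < "${2:-/proc/meminfo}"
    echo 0                                 # key absent: report zero
}
# scan_meminfo_key HugePages_Surp   -> 0, matching the 'echo 0' above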
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.673 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698440 kB' 'MemFree: 15489256 kB' 'MemUsed: 12209184 kB' 'SwapCached: 0 kB' 'Active: 5101116 kB' 'Inactive: 3586808 kB' 'Active(anon): 4976396 kB' 'Inactive(anon): 0 kB' 'Active(file): 124720 kB' 'Inactive(file): 3586808 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8540152 kB' 'Mapped: 23700 kB' 'AnonPages: 147876 kB' 'Shmem: 4828624 kB' 'KernelStack: 9032 kB' 'PageTables: 2208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89960 kB' 'Slab: 458192 kB' 'SReclaimable: 89960 kB' 'SUnreclaim: 368232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 
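Before the scan, get_meminfo picks its data source: /proc/meminfo by default, or /sys/devices/system/node/node<N>/meminfo when a node index is supplied (node 1 here), in which case every line carries a "Node <N> " prefix that is stripped before parsing. A hedged sketch of that selection, using sed/awk in place of the script's mapfile approach, with an assumed helper name:

# Sketch (assumed helper name); the per-node sysfs file prefixes lines with "Node <id> ".
get_node_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    sed 's/^Node [0-9]* //' "$mem_f" \
        | awk -F': *' -v key="$get" '$1 == key { print $2 + 0; exit }'
}
# get_node_meminfo HugePages_Surp 1   -> 0 for node1 in the run above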
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 
10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.674 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:00.675 node0=512 expecting 512 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:00.675 node1=512 expecting 512 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:00.675 00:04:00.675 real 0m3.703s 00:04:00.675 user 0m1.375s 00:04:00.675 sys 0m2.396s 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.675 10:06:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.675 ************************************ 00:04:00.675 END TEST even_2G_alloc 00:04:00.675 ************************************ 00:04:00.675 10:06:13 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:00.675 10:06:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.675 10:06:13 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.675 10:06:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.675 ************************************ 00:04:00.675 START TEST odd_alloc 00:04:00.675 ************************************ 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:00.675 10:06:13 
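even_2G_alloc ends by comparing each node's observed hugepage count against the expected half of the pool (512 per node), and the odd_alloc test that starts next requests 2098176 kB, which the trace shows being converted to nr_hugepages=1025 across the two nodes. A small illustrative check mirroring the 'nodeN=X expecting Y' lines above (the function name is not the script's own):

# Illustrative per-node assertion, in the spirit of the echoes above.
verify_node_split() {
    local expected=$1 node; shift
    local -a observed=("$@")
    for node in "${!observed[@]}"; do
        echo "node${node}=${observed[node]} expecting ${expected}"
        [[ ${observed[node]} -eq $expected ]] || return 1
    done
}
# verify_node_split 512 512 512   -> prints both lines and succeeds, as above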
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.675 10:06:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:03.979 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.979 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:03.979 10:06:17 
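odd_alloc splits the 1025-page request unevenly: the trace above assigns 512 pages to one node and 513 to the other before exporting HUGEMEM=2049 and re-running scripts/setup.sh (the listed devices are already bound to vfio-pci, so binding is a no-op). A hedged sketch of such a split, not the script's own code:

# Hedged sketch of distributing an odd page count across NUMA nodes
# (the trace assigns 513 + 512 for 1025 pages on 2 nodes).
split_hugepages() {
    local total=$1 nodes=$2 node base rem
    base=$(( total / nodes ))            # 1025 / 2 = 512
    rem=$(( total % nodes ))             # 1025 % 2 = 1 leftover page
    for (( node = 0; node < nodes; node++ )); do
        # the first $rem nodes take one extra page each
        echo "node${node}=$(( base + (node < rem ? 1 : 0) ))"
    done
}
# split_hugepages 1025 2   -> node0=513, node1=512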
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42581116 kB' 'MemAvailable: 46298796 kB' 'Buffers: 9316 kB' 'Cached: 11543692 kB' 'SwapCached: 0 kB' 'Active: 8454532 kB' 'Inactive: 3688888 kB' 'Active(anon): 8037940 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593672 kB' 'Mapped: 149592 kB' 'Shmem: 7447528 kB' 'KReclaimable: 218132 kB' 'Slab: 884516 kB' 'SReclaimable: 218132 kB' 'SUnreclaim: 666384 kB' 'KernelStack: 21712 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480908 kB' 'Committed_AS: 9310108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214244 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.979 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.980 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42580920 kB' 'MemAvailable: 46298600 kB' 'Buffers: 9316 kB' 'Cached: 11543696 kB' 'SwapCached: 0 kB' 'Active: 8448764 kB' 'Inactive: 3688888 kB' 'Active(anon): 8032172 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587948 kB' 'Mapped: 149296 kB' 'Shmem: 7447532 kB' 'KReclaimable: 218132 kB' 'Slab: 884524 kB' 'SReclaimable: 218132 kB' 'SUnreclaim: 666392 kB' 'KernelStack: 21712 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480908 kB' 'Committed_AS: 9305024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214224 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- 
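With transparent hugepages in madvise mode (the '[madvise]' check earlier), verify_nr_hugepages records anon=0 and then reads the system-wide HugePages_Surp from /proc/meminfo, which the following trace also resolves to 0. A quick, hedged way to pull the same two values outside the test harness:

# Hedged one-off cross-check; not part of the SPDK scripts.
anon=$(awk '/^AnonHugePages:/ { print $2 }' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ { print $2 }' /proc/meminfo)
echo "anon=${anon:-0} surp=${surp:-0}"   # both 0 in the run above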
setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 
10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.981 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.982 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.983 10:06:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42577328 kB' 'MemAvailable: 46295008 kB' 'Buffers: 9316 kB' 'Cached: 11543712 kB' 'SwapCached: 0 kB' 'Active: 8452268 kB' 'Inactive: 3688888 kB' 'Active(anon): 8035676 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591400 kB' 'Mapped: 149520 kB' 'Shmem: 7447548 kB' 'KReclaimable: 218132 kB' 'Slab: 884524 kB' 'SReclaimable: 218132 kB' 'SUnreclaim: 666392 kB' 'KernelStack: 21712 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480908 kB' 'Committed_AS: 9308156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214192 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 
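The xtrace above shows setup/common.sh's get_meminfo helper resolving HugePages_Rsvd: it snapshots /proc/meminfo, then walks the fields one by one, which is why every non-matching field appears in the log as a "continue". A minimal Bash sketch of what that traced loop appears to do is below; it is reconstructed only from the commands visible in this log, so the function signature, argument handling, and the extglob strip of the per-node "Node N " prefix are assumptions rather than the verbatim SPDK source.

shopt -s extglob

# Sketch of the traced helper: print the value of one /proc/meminfo (or per-node
# meminfo) field, e.g. get_meminfo HugePages_Rsvd or get_meminfo HugePages_Surp 0.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # When a node is given and its per-node file exists, read that instead
    # (with node unset this path does not exist, as the trace's test shows).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "; strip it
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching fields show up as "continue" in the trace
        echo "$val"                        # the trace's "echo 0" / "echo 1025" lines
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}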
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.983 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 
10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.984 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:04:03.985 nr_hugepages=1025 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:03.985 resv_hugepages=0 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:03.985 surplus_hugepages=0 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:03.985 anon_hugepages=0 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42573924 kB' 'MemAvailable: 46291604 kB' 'Buffers: 9316 kB' 'Cached: 11543732 kB' 'SwapCached: 0 kB' 'Active: 8448624 kB' 'Inactive: 3688888 kB' 'Active(anon): 8032032 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587740 kB' 'Mapped: 149016 kB' 'Shmem: 7447568 kB' 'KReclaimable: 218132 kB' 'Slab: 884524 kB' 'SReclaimable: 218132 kB' 'SUnreclaim: 666392 kB' 'KernelStack: 21696 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480908 kB' 'Committed_AS: 9304048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214192 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.985 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.986 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:03.987 10:06:17 
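By this point the trace has established nr_hugepages=1025, resv_hugepages=0 and surplus_hugepages=0, confirmed that HugePages_Total from /proc/meminfo equals nr_hugepages + surp + resv, and get_nodes has recorded 513 pages on node0 and 512 on node1. The sketch below only restates that arithmetic for the odd_alloc case; the 513/512 split is the value observed in this run, and the variable names mirror the trace rather than the actual hugepages.sh.

# Odd allocation: 1025 pages on a 2-node system cannot split evenly,
# so the kernel balances them as 513 on node0 and 512 on node1.
nr_hugepages=1025   # requested total (odd on purpose)
surp=0              # HugePages_Surp read via get_meminfo
resv=0              # HugePages_Rsvd read via get_meminfo

(( 1025 == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch"

declare -A nodes_sys=([0]=513 [1]=512)   # per-node counts observed in this run
total=0
for node in "${!nodes_sys[@]}"; do
    (( total += nodes_sys[node] ))
done
(( total == nr_hugepages )) && echo "per-node split adds up: ${nodes_sys[0]} + ${nodes_sys[1]} = $total"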
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27074444 kB' 'MemUsed: 5510924 kB' 'SwapCached: 0 kB' 'Active: 3347740 kB' 'Inactive: 102080 kB' 'Active(anon): 3055868 kB' 'Inactive(anon): 0 kB' 'Active(file): 291872 kB' 'Inactive(file): 102080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3012892 kB' 'Mapped: 125316 kB' 'AnonPages: 440088 kB' 'Shmem: 2618940 kB' 'KernelStack: 12680 kB' 'PageTables: 5368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128172 kB' 'Slab: 426136 kB' 'SReclaimable: 128172 kB' 'SUnreclaim: 297964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.987 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.988 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698440 kB' 'MemFree: 15499408 kB' 'MemUsed: 12199032 kB' 'SwapCached: 0 kB' 'Active: 5100844 kB' 'Inactive: 3586808 kB' 'Active(anon): 4976124 kB' 'Inactive(anon): 0 kB' 'Active(file): 124720 kB' 'Inactive(file): 3586808 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8540176 kB' 'Mapped: 23700 kB' 'AnonPages: 147584 kB' 'Shmem: 4828648 kB' 'KernelStack: 9032 kB' 'PageTables: 2304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89960 kB' 'Slab: 458388 kB' 'SReclaimable: 89960 kB' 'SUnreclaim: 368428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
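
The long run of [[ ... == HugePages_Surp ]] / continue entries traced here is setup/common.sh's get_meminfo walking a node's meminfo file one "key: value" pair at a time until it reaches the requested field, then echoing its value. A minimal standalone sketch of that technique follows, assuming the same /sys/devices/system/node/node<N>/meminfo layout shown in the trace; the function and variable names below are illustrative, not the ones in setup/common.sh.

#!/usr/bin/env bash
# Sketch only: fetch one field from a node's meminfo, stripping the
# "Node <N> " prefix the kernel adds to the per-node file.
get_node_meminfo() {
    local get=$1 node=$2
    local file=/sys/devices/system/node/node${node}/meminfo
    [[ -e $file ]] || file=/proc/meminfo        # global file has no Node prefix
    local line var val rest
    while read -r line; do
        line=${line#"Node $node "}              # e.g. "Node 0 HugePages_Surp: 0"
        IFS=': ' read -r var val rest <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                         # value only; units (kB) land in $rest
            return 0
        fi
    done < "$file"
    return 1
}

# Example: surplus huge pages on node 0 (prints 0 on the system traced above).
get_node_meminfo HugePages_Surp 0

The traced script reads the whole file with mapfile and strips the "Node <N> " prefix with an extglob substitution before looping, but the key/value walk and the early return are the same idea.
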
00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.989 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:04:03.990 node0=513 expecting 513 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:03.990 node1=512 expecting 512 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:03.990 00:04:03.990 real 0m3.706s 00:04:03.990 user 0m1.462s 00:04:03.990 sys 0m2.313s 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.990 10:06:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.990 ************************************ 00:04:03.990 END TEST odd_alloc 00:04:03.990 ************************************ 00:04:03.990 10:06:17 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:04:03.990 10:06:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- 
# '[' 2 -le 1 ']' 00:04:03.990 10:06:17 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.990 10:06:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.250 ************************************ 00:04:04.250 START TEST custom_alloc 00:04:04.250 ************************************ 00:04:04.250 10:06:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:04:04.250 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:04.250 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:04.250 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:04.250 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:04.250 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.251 10:06:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:07.548 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.548 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.548 
10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 41531844 kB' 'MemAvailable: 45249492 kB' 'Buffers: 9316 kB' 'Cached: 11543876 kB' 'SwapCached: 0 kB' 'Active: 8449680 kB' 'Inactive: 3688888 kB' 'Active(anon): 8033088 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588536 kB' 'Mapped: 149160 kB' 'Shmem: 7447712 kB' 'KReclaimable: 218068 kB' 'Slab: 884568 kB' 'SReclaimable: 218068 kB' 'SUnreclaim: 666500 kB' 'KernelStack: 21744 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957644 kB' 'Committed_AS: 9307596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.548 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
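
Earlier in this section, the custom_alloc prologue (setup/hugepages.sh@157-177 in the trace) turns two requested pool sizes, 1048576 kB and 2097152 kB, into per-node page counts of 512 and 1024 and joins them into the HUGENODE string handed to scripts/setup.sh. A rough sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps above; the helper name is made up for illustration.

#!/usr/bin/env bash
# Sketch only: requested pool sizes (kB) -> per-node 2048 kB hugepage counts
# -> the HUGENODE string and total seen in the custom_alloc trace.
default_hugepages=2048                       # kB, Hugepagesize from the log

pages_for_size() {                           # hypothetical helper
    local size_kb=$1
    echo $(( size_kb / default_hugepages ))
}

nodes_hp[0]=$(pages_for_size 1048576)        # -> 512
nodes_hp[1]=$(pages_for_size 2097152)        # -> 1024

HUGENODE=
total=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+="${HUGENODE:+,}nodes_hp[$node]=${nodes_hp[node]}"
    (( total += nodes_hp[node] ))
done
echo "$HUGENODE"                             # nodes_hp[0]=512,nodes_hp[1]=1024
echo "total=$total"                          # 1536, matching HugePages_Total above
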
00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.549 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 41531680 kB' 'MemAvailable: 45249324 kB' 'Buffers: 9316 kB' 'Cached: 11543880 kB' 'SwapCached: 0 kB' 'Active: 8449812 kB' 'Inactive: 3688888 kB' 'Active(anon): 8033220 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588712 kB' 'Mapped: 149056 kB' 'Shmem: 7447716 kB' 'KReclaimable: 218060 kB' 'Slab: 884548 kB' 'SReclaimable: 218060 kB' 'SUnreclaim: 666488 kB' 'KernelStack: 21792 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957644 kB' 'Committed_AS: 9307612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214320 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.550 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.814 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.815 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.816 10:06:21 
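[editorial sketch] The trace above shows setup/common.sh's get_meminfo helper walking the captured meminfo snapshot field by field with IFS=': ' until the requested key (here HugePages_Surp) matches, then echoing its value. A minimal, self-contained sketch of that parsing pattern follows; the helper name get_meminfo_field is hypothetical and per-node handling is omitted:

    #!/usr/bin/env bash
    get_meminfo_field() {
        # Hypothetical reduced version of the pattern traced above:
        # read /proc/meminfo line by line, split on ': ', and print the
        # value of the requested key (0 if the key is absent).
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "${val%% *}"   # strip a trailing "kB" unit if present
            return 0
        done < /proc/meminfo
        echo 0
    }

    # Example queries matching the ones in this log:
    get_meminfo_field HugePages_Surp
    get_meminfo_field HugePages_Rsvd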
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 41531588 kB' 'MemAvailable: 45249232 kB' 'Buffers: 9316 kB' 'Cached: 11543896 kB' 'SwapCached: 0 kB' 'Active: 8450376 kB' 'Inactive: 3688888 kB' 'Active(anon): 8033784 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589248 kB' 'Mapped: 149056 kB' 'Shmem: 7447732 kB' 'KReclaimable: 218060 kB' 'Slab: 884548 kB' 'SReclaimable: 218060 kB' 'SUnreclaim: 666488 kB' 'KernelStack: 21760 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957644 kB' 'Committed_AS: 9304808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214304 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.816 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 
10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.817 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:04:07.818 nr_hugepages=1536 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:07.818 resv_hugepages=0 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:07.818 surplus_hugepages=0 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:07.818 anon_hugepages=0 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.818 10:06:21 
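[editorial sketch] At this point the trace has established anon=0, surp=0 and resv=0, and hugepages.sh confirms that the kernel granted exactly the requested 1536 huge pages via (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages )). A small sketch of that consistency check, assuming the get_meminfo_field helper from the earlier sketch and an illustrative variable for the requested count:

    # Assumed: get_meminfo_field from the sketch above is defined.
    requested=1536                                    # illustrative target, matching the log
    nr_hugepages=$(get_meminfo_field HugePages_Total)
    surp=$(get_meminfo_field HugePages_Surp)
    resv=$(get_meminfo_field HugePages_Rsvd)
    anon=$(get_meminfo_field AnonHugePages)

    # The allocation counts as successful only if the pool size matches the
    # request exactly and no surplus or reserved pages inflate the total.
    if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
        echo "nr_hugepages=$nr_hugepages surplus=$surp reserved=$resv anon=$anon (OK)"
    else
        echo "hugepage accounting mismatch" >&2
    fi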
setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 41532132 kB' 'MemAvailable: 45249776 kB' 'Buffers: 9316 kB' 'Cached: 11543920 kB' 'SwapCached: 0 kB' 'Active: 8449720 kB' 'Inactive: 3688888 kB' 'Active(anon): 8033128 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588612 kB' 'Mapped: 149040 kB' 'Shmem: 7447756 kB' 'KReclaimable: 218060 kB' 'Slab: 884588 kB' 'SReclaimable: 218060 kB' 'SUnreclaim: 666528 kB' 'KernelStack: 21696 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957644 kB' 'Committed_AS: 9308228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214256 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 
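[editorial sketch] The mapfile and mem=("${mem[@]#Node +([0-9]) }") entries in the trace show that the helper also supports per-NUMA-node queries: with a node number it would read /sys/devices/system/node/node<N>/meminfo and strip the leading "Node <N> " prefix so the lines parse like /proc/meminfo. A hedged sketch of that normalization, assuming a NUMA-enabled kernel that exposes node0 (the node number and array name are illustrative):

    shopt -s extglob                             # needed for the +([0-9]) pattern
    node=0                                       # illustrative node number
    mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"                    # one array element per line
    mem=("${mem[@]#Node +([0-9]) }")             # drop the "Node 0 " prefix
    printf '%s\n' "${mem[@]:0:3}"                # show the first few normalized lines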
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.818 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.819 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27088176 kB' 'MemUsed: 5497192 kB' 'SwapCached: 0 kB' 'Active: 3348424 kB' 'Inactive: 102080 kB' 
'Active(anon): 3056552 kB' 'Inactive(anon): 0 kB' 'Active(file): 291872 kB' 'Inactive(file): 102080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3013060 kB' 'Mapped: 125340 kB' 'AnonPages: 440552 kB' 'Shmem: 2619108 kB' 'KernelStack: 12648 kB' 'PageTables: 5308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128164 kB' 'Slab: 426148 kB' 'SReclaimable: 128164 kB' 'SUnreclaim: 297984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.820 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.821 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698440 kB' 'MemFree: 14443732 kB' 'MemUsed: 13254708 kB' 'SwapCached: 0 kB' 'Active: 5101360 kB' 'Inactive: 3586808 kB' 'Active(anon): 4976640 kB' 'Inactive(anon): 0 kB' 'Active(file): 124720 kB' 'Inactive(file): 
3586808 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8540200 kB' 'Mapped: 23700 kB' 'AnonPages: 148128 kB' 'Shmem: 4828672 kB' 'KernelStack: 9064 kB' 'PageTables: 2296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89896 kB' 'Slab: 458440 kB' 'SReclaimable: 89896 kB' 'SUnreclaim: 368544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.822 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.822 10:06:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:07.823 node0=512 expecting 512 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:04:07.823 node1=1024 expecting 1024 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:07.823 00:04:07.823 real 0m3.721s 00:04:07.823 user 0m1.422s 00:04:07.823 sys 0m2.366s 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.823 10:06:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.823 ************************************ 00:04:07.823 END TEST custom_alloc 00:04:07.823 ************************************ 00:04:07.823 10:06:21 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:07.823 10:06:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.823 10:06:21 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.823 10:06:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.823 ************************************ 
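Editor's note: the custom_alloc xtrace above is one small helper unrolled many times. get_meminfo() reads /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node is given, strips the leading "Node N " prefix, and walks "field: value" pairs until it reaches the requested key; the test then confirms that the 1536 reserved hugepages really split 512/1024 across the two nodes, which is what the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines assert. A compact standalone sketch of that lookup follows. It is not the repository's setup/common.sh (which uses mapfile plus an extglob prefix strip rather than sed), and the 512/1024 expectations are simply the values observed in this run.

    get_meminfo() {                     # sketch: get_meminfo FIELD [NODE]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo lines carry a "Node <n> " prefix; drop it, then scan
        # "field: value" pairs until the requested field shows up.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    expected=(512 1024)                 # split expected by this particular run
    for node in 0 1; do
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting ${expected[node]}"
    done

On the box in this log that loop prints "node0=512 expecting 512" and "node1=1024 expecting 1024", matching the output captured above.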
00:04:07.823 START TEST no_shrink_alloc 00:04:07.823 ************************************ 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.823 10:06:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:11.118 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:11.118 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:11.118 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:11.118 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:11.118 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:04:11.381 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:11.381 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.381 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42601836 kB' 'MemAvailable: 46319480 kB' 'Buffers: 9316 kB' 'Cached: 11544048 kB' 'SwapCached: 0 kB' 'Active: 8450304 kB' 'Inactive: 3688888 kB' 'Active(anon): 8033712 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589020 kB' 'Mapped: 149088 kB' 'Shmem: 7447884 kB' 'KReclaimable: 218060 kB' 'Slab: 884124 kB' 'SReclaimable: 218060 kB' 'SUnreclaim: 666064 kB' 'KernelStack: 21696 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9305656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214256 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.382 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
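Note: the repeated '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue' lines above and below are the xtrace of get_meminfo walking every /proc/meminfo field until it reaches the one requested (here AnonHugePages). A minimal sketch of that lookup, assuming it mirrors the traced setup/common.sh helper (illustrative only, not the actual source):
get_meminfo_sketch() {
    # Sketch only: $1 is the /proc/meminfo key to look up, e.g. AnonHugePages.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field produces one of the "continue" trace lines above.
        [[ $var == "$get" ]] || continue
        echo "$val"   # value in kB, or a bare count for the HugePages_* fields
        return 0
    done < /proc/meminfo
}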
00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Surp 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42602616 kB' 'MemAvailable: 46320260 kB' 'Buffers: 9316 kB' 'Cached: 11544048 kB' 'SwapCached: 0 kB' 'Active: 8451184 kB' 'Inactive: 3688888 kB' 'Active(anon): 8034592 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589928 kB' 'Mapped: 149048 kB' 'Shmem: 7447884 kB' 'KReclaimable: 218060 kB' 'Slab: 884180 kB' 'SReclaimable: 218060 kB' 'SUnreclaim: 666120 kB' 'KernelStack: 21712 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9305672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214256 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.383 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.384 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
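Note: each pass also evaluates '[[ -e /sys/devices/system/node/node/meminfo ]]' with an empty node id, which is why mem_f stays /proc/meminfo here. A hedged sketch of that source selection, assuming it matches the traced logic (pick_meminfo_source is an illustrative name):
pick_meminfo_source() {
    # Sketch: use the per-NUMA-node meminfo when a node id is supplied and the
    # sysfs file exists; otherwise fall back to the system-wide /proc/meminfo.
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}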
00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42603152 kB' 'MemAvailable: 46320796 kB' 'Buffers: 9316 kB' 'Cached: 11544068 kB' 'SwapCached: 0 kB' 'Active: 8450604 kB' 'Inactive: 3688888 kB' 'Active(anon): 8034012 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589276 kB' 'Mapped: 149048 kB' 'Shmem: 7447904 kB' 'KReclaimable: 218060 kB' 'Slab: 884180 kB' 'SReclaimable: 218060 kB' 'SUnreclaim: 666120 kB' 'KernelStack: 21696 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9305696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214256 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.385 10:06:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.385 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.385 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.386 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.387 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.649 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:11.650 nr_hugepages=1024 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:11.650 resv_hugepages=0 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:11.650 surplus_hugepages=0 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:11.650 anon_hugepages=0 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:11.650 10:06:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42604732 kB' 'MemAvailable: 46322376 kB' 'Buffers: 9316 kB' 'Cached: 11544108 kB' 'SwapCached: 0 kB' 'Active: 8450284 kB' 'Inactive: 3688888 kB' 'Active(anon): 8033692 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588884 kB' 'Mapped: 149048 kB' 'Shmem: 7447944 kB' 'KReclaimable: 218060 kB' 'Slab: 884180 kB' 'SReclaimable: 218060 kB' 'SUnreclaim: 666120 kB' 'KernelStack: 21680 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9305716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214256 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
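(Editor's note on the trace above and below: each of these long runs is setup/common.sh's get_meminfo walking one meminfo line at a time with IFS=': ', skipping every key that is not the one requested, then echoing the matching value. A minimal standalone sketch of that traced pattern follows; the function name is illustrative, not the exact SPDK helper.)

#!/usr/bin/env bash
# Sketch: return the value of one /proc/meminfo field, mirroring the traced loop.
# The long runs of "continue" above are the non-matching keys being skipped.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip until the requested key matches
        echo "$val"                        # trailing "kB" lands in the discarded third field
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Total   # prints 1024 on this test node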
00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.650 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.651 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.652 10:06:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26076924 kB' 'MemUsed: 6508444 kB' 'SwapCached: 0 kB' 'Active: 3347344 kB' 'Inactive: 102080 kB' 'Active(anon): 3055472 kB' 'Inactive(anon): 0 kB' 'Active(file): 291872 kB' 'Inactive(file): 102080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3013108 kB' 'Mapped: 125348 kB' 'AnonPages: 439424 kB' 'Shmem: 2619156 kB' 'KernelStack: 12648 kB' 'PageTables: 5352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128164 kB' 'Slab: 425848 kB' 'SReclaimable: 128164 kB' 'SUnreclaim: 297684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.652 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
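(The per-node pass above parses the same way; only the input changes. Because node=0 was passed, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the trace strips before the field loop runs. A rough equivalent, with illustrative naming:)

# Sketch: choose the meminfo source the way the traced code does.
pick_meminfo_source() {
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Same effect as mem=("${mem[@]#Node +([0-9]) }") in the trace: drop the "Node N " prefix.
    sed 's/^Node [0-9]* //' "$mem_f"
}

pick_meminfo_source 0 | grep '^HugePages_Surp'   # e.g. "HugePages_Surp:     0"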
00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.653 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:11.654 node0=1024 expecting 1024 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.654 10:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:14.949 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:14.949 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:14.949 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42618740 kB' 'MemAvailable: 46336384 kB' 'Buffers: 9316 kB' 'Cached: 11544200 kB' 'SwapCached: 0 kB' 'Active: 8452612 kB' 'Inactive: 3688888 kB' 'Active(anon): 8036020 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590776 kB' 'Mapped: 149156 kB' 'Shmem: 7448036 kB' 'KReclaimable: 218060 kB' 'Slab: 884140 kB' 'SReclaimable: 218060 kB' 'SUnreclaim: 666080 kB' 'KernelStack: 21776 kB' 'PageTables: 7512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9309204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214352 kB' 'VmallocChunk: 0 kB' 'Percpu: 75264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.949 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.949 10:06:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
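(This second pass, timestamped 10:06:28, is verify_nr_hugepages re-reading the counters after scripts/setup.sh ran with NRHUGE=512 HUGENODE=0 CLEAR_HUGE=no. The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" check gates the AnonHugePages lookup: anonymous THP usage is only counted when transparent hugepages are not globally disabled. A hedged illustration; the sysfs path is the standard kernel location and is assumed here, not shown in this log:)

# Sketch: count AnonHugePages only when THP is not set to "never".
thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
anon=0
if [[ $thp_setting != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
fi
echo "anon_hugepages=${anon:-0}"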
00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.950 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283808 kB' 'MemFree: 42623768 kB' 'MemAvailable: 46341412 kB' 'Buffers: 9316 kB' 'Cached: 11544204 kB' 'SwapCached: 0 kB' 'Active: 8451820 kB' 'Inactive: 3688888 kB' 'Active(anon): 8035228 kB' 'Inactive(anon): 0 kB' 'Active(file): 416592 kB' 'Inactive(file): 3688888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590496 kB' 'Mapped: 149052 kB' 'Shmem: 7448040 kB' 'KReclaimable: 218060 kB' 'Slab: 884140 kB' 'SReclaimable: 218060 kB' 'SUnreclaim: 666080 kB' 'KernelStack: 21744 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481932 kB' 'Committed_AS: 9309220 kB' 'VmallocTotal: 34359738367 kB' 
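For readability, here is a minimal sketch of the lookup helper this xtrace is exercising, reconstructed from the trace above; the real setup/common.sh:get_meminfo may differ in detail, and the per-node branch and error path shown here are assumptions:

# Sketch only: reconstructs the behaviour visible in the xtrace; not the verbatim SPDK script.
shopt -s extglob                                   # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo
    # When a NUMA node is requested and a per-node meminfo exists, read that file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")               # per-node files prefix every line with "Node N "
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"     # e.g. "HugePages_Surp:   0" -> var=HugePages_Surp val=0
        [[ $var == "$get" ]] || continue           # skip keys until the requested one matches
        echo "$val"                                # value in kB, or a bare count for HugePages_*
        return 0
    done
    return 1
}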
00:04:14.951 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # key-scan loop (repetitive xtrace summarized): skips every non-matching key of that snapshot, MemTotal through HugePages_Rsvd, until HugePages_Surp matches the requested key
00:04:15.218 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.218 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.218 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:04:15.218 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:15.218 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # lookup setup (xtrace summarized): local get=HugePages_Rsvd, node= (empty), mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ', read -r var val _
00:04:15.218 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' of another full /proc/meminfo snapshot (dump summarized; MemFree now 42624796 kB, hugepage fields unchanged: 'AnonHugePages: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB')
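Taken together, the lookups traced at setup/hugepages.sh@96-99 appear to boil down to the following; this is a sketch inferred from the line numbers in the trace, not the verbatim script:

anon=$(get_meminfo AnonHugePages)     # 0 in this run
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run (that lookup continues below)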
00:04:15.218 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # key-scan loop (repetitive xtrace summarized): skips every non-matching key of that snapshot, MemTotal through HugePages_Free, until HugePages_Rsvd matches the requested key
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:15.220 nr_hugepages=1024
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:15.220 resv_hugepages=0
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:15.220 surplus_hugepages=0
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:15.220 anon_hugepages=0
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # lookup setup (xtrace summarized): local get=HugePages_Total, node= (empty), mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ', read -r var val _
00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' of another full /proc/meminfo snapshot (dump summarized; MemFree now 42625728 kB, hugepage fields unchanged: 'AnonHugePages: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB')
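The bookkeeping traced at setup/hugepages.sh@101-108 reduces to the check sketched below; the literal 1024 is the hugepage count this test expects, and surp/resv/anon are the values returned by the lookups above (sketch only, not the verbatim script):

surp=0 resv=0 anon=0 nr_hugepages=1024       # values observed in this run
echo "nr_hugepages=$nr_hugepages"            # hugepages.sh@101 -> "nr_hugepages=1024"
echo "resv_hugepages=$resv"                  # hugepages.sh@102 -> "resv_hugepages=0"
echo "surplus_hugepages=$surp"               # hugepages.sh@103 -> "surplus_hugepages=0"
echo "anon_hugepages=$anon"                  # hugepages.sh@104 -> "anon_hugepages=0"
(( 1024 == nr_hugepages + surp + resv ))     # @106: expected total is covered by pool + surplus + reserved
(( 1024 == nr_hugepages ))                   # @108: and the pool itself is exactly 1024 pages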
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 515444 kB' 'DirectMap2M: 11753472 kB' 'DirectMap1G: 57671680 kB' 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.220 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.221 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26056200 kB' 'MemUsed: 6529168 kB' 'SwapCached: 0 kB' 'Active: 3349828 kB' 'Inactive: 102080 kB' 'Active(anon): 3057956 kB' 'Inactive(anon): 0 kB' 'Active(file): 291872 kB' 'Inactive(file): 102080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3013156 kB' 'Mapped: 125352 kB' 'AnonPages: 441876 kB' 'Shmem: 2619204 kB' 'KernelStack: 12760 kB' 'PageTables: 5628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128164 kB' 'Slab: 426184 kB' 'SReclaimable: 128164 kB' 'SUnreclaim: 298020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.222 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.223 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:15.224 node0=1024 expecting 1024 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.224 00:04:15.224 real 0m7.260s 00:04:15.224 user 0m2.775s 00:04:15.224 sys 0m4.622s 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.224 10:06:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.224 ************************************ 00:04:15.224 END TEST no_shrink_alloc 00:04:15.224 ************************************ 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:15.224 10:06:28 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:15.224 00:04:15.224 real 0m24.364s 00:04:15.224 user 0m8.789s 00:04:15.224 sys 0m14.571s 
00:04:15.224 10:06:28 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.224 10:06:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.224 ************************************ 00:04:15.224 END TEST hugepages 00:04:15.224 ************************************ 00:04:15.224 10:06:28 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:15.224 10:06:28 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.224 10:06:28 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.224 10:06:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.224 ************************************ 00:04:15.224 START TEST driver 00:04:15.224 ************************************ 00:04:15.224 10:06:28 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:15.484 * Looking for test storage... 00:04:15.484 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:15.484 10:06:28 setup.sh.driver -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.484 10:06:28 setup.sh.driver -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.484 10:06:28 setup.sh.driver -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.484 10:06:29 setup.sh.driver -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:04:15.484 10:06:29 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.485 10:06:29 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:04:15.485 10:06:29 setup.sh.driver -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.485 10:06:29 setup.sh.driver -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.485 --rc genhtml_branch_coverage=1 00:04:15.485 --rc genhtml_function_coverage=1 00:04:15.485 --rc genhtml_legend=1 00:04:15.485 --rc geninfo_all_blocks=1 00:04:15.485 --rc geninfo_unexecuted_blocks=1 00:04:15.485 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:15.485 ' 00:04:15.485 10:06:29 setup.sh.driver -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.485 --rc genhtml_branch_coverage=1 00:04:15.485 --rc genhtml_function_coverage=1 00:04:15.485 --rc genhtml_legend=1 00:04:15.485 --rc geninfo_all_blocks=1 00:04:15.485 --rc geninfo_unexecuted_blocks=1 00:04:15.485 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:15.485 ' 00:04:15.485 10:06:29 setup.sh.driver -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.485 --rc genhtml_branch_coverage=1 00:04:15.485 --rc genhtml_function_coverage=1 00:04:15.485 --rc genhtml_legend=1 00:04:15.485 --rc geninfo_all_blocks=1 00:04:15.485 --rc geninfo_unexecuted_blocks=1 00:04:15.485 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:15.485 ' 00:04:15.485 10:06:29 setup.sh.driver -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.485 --rc genhtml_branch_coverage=1 00:04:15.485 --rc genhtml_function_coverage=1 00:04:15.485 --rc genhtml_legend=1 00:04:15.485 --rc geninfo_all_blocks=1 00:04:15.485 --rc geninfo_unexecuted_blocks=1 00:04:15.485 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:15.485 ' 00:04:15.485 10:06:29 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:15.485 10:06:29 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.485 10:06:29 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.767 10:06:33 setup.sh.driver -- 
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:20.767 10:06:33 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.767 10:06:33 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.767 10:06:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.767 ************************************ 00:04:20.767 START TEST guess_driver 00:04:20.767 ************************************ 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:20.767 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.767 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.767 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.767 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.767 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:20.767 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:20.767 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:20.767 Looking for driver=vfio-pci 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.767 10:06:34 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:04:20.768 10:06:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.768 10:06:34 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:24.062 10:06:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.443 10:06:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:25.443 10:06:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:25.443 10:06:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.703 10:06:39 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:25.703 10:06:39 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:25.703 10:06:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.703 10:06:39 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.982 00:04:30.982 real 0m10.082s 00:04:30.982 user 0m2.770s 00:04:30.982 sys 0m5.074s 00:04:30.982 10:06:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.982 10:06:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:30.982 ************************************ 00:04:30.982 END TEST guess_driver 00:04:30.982 ************************************ 00:04:30.982 00:04:30.982 real 0m15.297s 00:04:30.982 user 0m4.321s 00:04:30.982 sys 0m7.913s 00:04:30.982 10:06:44 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.982 10:06:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:30.982 ************************************ 00:04:30.982 END TEST driver 00:04:30.982 ************************************ 00:04:30.982 10:06:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:30.982 10:06:44 setup.sh -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.982 10:06:44 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.982 10:06:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.982 ************************************ 00:04:30.982 START TEST devices 00:04:30.982 ************************************ 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:30.982 * Looking for test storage... 00:04:30.982 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1711 -- # lcov --version 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.982 10:06:44 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.982 --rc genhtml_branch_coverage=1 00:04:30.982 --rc genhtml_function_coverage=1 00:04:30.982 --rc genhtml_legend=1 00:04:30.982 --rc geninfo_all_blocks=1 00:04:30.982 --rc geninfo_unexecuted_blocks=1 00:04:30.982 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:30.982 ' 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.982 --rc genhtml_branch_coverage=1 00:04:30.982 --rc genhtml_function_coverage=1 00:04:30.982 --rc genhtml_legend=1 00:04:30.982 --rc geninfo_all_blocks=1 00:04:30.982 --rc geninfo_unexecuted_blocks=1 00:04:30.982 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:30.982 ' 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.982 --rc genhtml_branch_coverage=1 00:04:30.982 --rc genhtml_function_coverage=1 00:04:30.982 --rc genhtml_legend=1 00:04:30.982 --rc geninfo_all_blocks=1 00:04:30.982 --rc geninfo_unexecuted_blocks=1 00:04:30.982 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:30.982 ' 00:04:30.982 10:06:44 setup.sh.devices -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.982 --rc genhtml_branch_coverage=1 00:04:30.982 --rc genhtml_function_coverage=1 00:04:30.982 --rc genhtml_legend=1 00:04:30.982 --rc geninfo_all_blocks=1 00:04:30.982 --rc geninfo_unexecuted_blocks=1 00:04:30.982 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:30.982 ' 00:04:30.982 10:06:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:30.982 10:06:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:30.982 10:06:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.982 10:06:44 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:35.176 10:06:48 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:04:35.176 10:06:48 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:35.176 No valid GPT data, bailing 00:04:35.176 10:06:48 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.176 10:06:48 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:04:35.176 10:06:48 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:35.176 10:06:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:35.176 10:06:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:35.176 10:06:48 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 
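The trace above is setup/devices.sh choosing a disk to run the tests against: it walks the NVMe namespaces under /sys/block, skips zoned namespaces and anything already carrying a partition table (the spdk-gpt.py probe in the trace), requires at least min_disk_size=3221225472 bytes (3 GiB), and records the block-to-PCI mapping (nvme0n1 -> 0000:d8:00.0). A minimal standalone sketch of that selection pass follows; it omits the GPT probe, and the helper bodies plus the sysfs PCI lookup are illustrative assumptions, not the upstream SPDK source.

#!/usr/bin/env bash
# Sketch of the disk-candidate scan from the setup/devices.sh trace above.
# The 3 GiB floor and the blocks_to_pci map come from the log; the helper
# bodies and the sysfs PCI lookup are illustrative, not the upstream source.

min_disk_size=3221225472            # 3 GiB, as printed at setup/devices.sh@198

declare -a blocks=()
declare -A blocks_to_pci=()

is_block_zoned() {
    local dev=$1
    [[ -e /sys/block/$dev/queue/zoned && $(<"/sys/block/$dev/queue/zoned") != none ]]
}

sec_size_to_bytes() {
    # sysfs reports 512-byte sectors; convert to bytes (1600321314816 in the log)
    echo $(( $(<"/sys/block/$1/size") * 512 ))
}

for block in /sys/block/nvme*n*; do
    [[ -e $block ]] || continue             # no NVMe namespaces on this host
    dev=${block##*/}
    [[ $dev == *c* ]] && continue           # skip per-controller multipath nodes
    is_block_zoned "$dev" && continue       # zoned namespaces are excluded
    (( $(sec_size_to_bytes "$dev") >= min_disk_size )) || continue
    pci=$(basename "$(readlink -f "$block/device/device")")   # e.g. 0000:d8:00.0
    blocks+=("$dev")
    blocks_to_pci[$dev]=$pci
done

echo "test disk: ${blocks[0]:-<none>} (pci ${blocks_to_pci[${blocks[0]:-_}]:-n/a})"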
00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:35.176 10:06:48 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.176 10:06:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.176 ************************************ 00:04:35.176 START TEST nvme_mount 00:04:35.176 ************************************ 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.176 10:06:48 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:35.745 Creating new GPT entries in memory. 00:04:35.745 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:35.745 other utilities. 00:04:36.004 10:06:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:36.004 10:06:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.004 10:06:49 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:36.004 10:06:49 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.004 10:06:49 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:36.941 Creating new GPT entries in memory. 00:04:36.941 The operation has completed successfully. 00:04:36.941 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:36.941 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.941 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 438123 00:04:36.941 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.941 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:36.941 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.941 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.942 10:06:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.234 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:40.235 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:40.494 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.494 10:06:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.753 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:40.753 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:40.753 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:40.753 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:40.753 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:40.753 10:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.754 10:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:44.058 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.317 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.317 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:44.317 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.317 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.317 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 
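Each PCI sweep above is the verify helper re-reading the output of setup.sh config with PCI_ALLOWED pinned to 0000:d8:00.0 and checking that the row for the device under test lists the expected entry in its Active devices column. A rough standalone equivalent follows; the four-field read and the substring match mirror the trace, while the script path, the expected string, and the exact column layout are assumptions.

#!/usr/bin/env bash
# Sketch of the verify pass repeated in this log: re-run "setup.sh config"
# restricted to the target PCI device and confirm its status line reports
# the expected active mount/holder. The "read -r pci _ _ status" split and
# the "Active devices:" check mirror the trace; the path and exact output
# layout are assumptions.

spdk_setup=./scripts/setup.sh          # assumed path inside an SPDK checkout
target_pci=0000:d8:00.0
expected=mount@nvme0n1:nvme0n1p1       # what the nvme_mount case expects to see

found=0
while read -r pci _ _ status; do
    [[ $pci == "$target_pci" ]] || continue
    [[ $status == *"Active devices: "*"$expected"* ]] && found=1
done < <(PCI_ALLOWED=$target_pci "$spdk_setup" config)

if (( found == 1 )); then
    echo "verified: $expected is active on $target_pci"
else
    echo "verification failed for $target_pci" >&2
    exit 1
fi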
00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.318 10:06:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.609 10:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.610 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.610 00:04:47.610 real 0m12.861s 00:04:47.610 user 0m3.761s 00:04:47.610 sys 0m7.045s 00:04:47.610 10:07:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.610 10:07:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:47.610 ************************************ 00:04:47.610 END TEST nvme_mount 00:04:47.610 ************************************ 00:04:47.869 10:07:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:47.869 10:07:01 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.869 10:07:01 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.869 10:07:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.869 ************************************ 00:04:47.869 START TEST dm_mount 00:04:47.869 ************************************ 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # 
dm_mount 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:47.869 10:07:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:48.808 Creating new GPT entries in memory. 00:04:48.808 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.808 other utilities. 00:04:48.808 10:07:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.808 10:07:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.808 10:07:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.808 10:07:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.808 10:07:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:49.746 Creating new GPT entries in memory. 00:04:49.746 The operation has completed successfully. 00:04:49.746 10:07:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.746 10:07:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.746 10:07:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:49.746 10:07:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:49.746 10:07:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:51.126 The operation has completed successfully. 
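The sgdisk calls just above carve two 1 GiB partitions for the dm_mount test; the traced arithmetic (( part_start = part_start == 0 ? 2048 : part_end + 1 )) and (( part_end = part_start + size - 1 )), with size = 1073741824 / 512 sectors, is exactly what yields 1:2048:2099199 and 2:2099200:4196351. A compact sketch of that sequence follows, with udevadm settle standing in as an assumption for scripts/sync_dev_uevents.sh, which the real test uses to wait for the partition uevents.

#!/usr/bin/env bash
# Sketch of the partitioning sequence above: wipe the GPT, then carve two
# 1 GiB partitions while holding an exclusive flock on the disk so parallel
# runs cannot race the partition table. The sector arithmetic reproduces
# setup/common.sh@58/@59 as traced (1:2048:2099199, 2:2099200:4196351);
# "udevadm settle" stands in for scripts/sync_dev_uevents.sh.
set -euo pipefail

disk=/dev/nvme0n1
size=$((1073741824 / 512))        # 1 GiB in 512-byte sectors, as in the trace

sgdisk "$disk" --zap-all

part_start=0 part_end=0
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
done

udevadm settle                    # wait for nvme0n1p1/nvme0n1p2 device nodes
lsblk "$disk"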
00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 442649 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.126 10:07:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.421 10:07:07 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.421 10:07:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.711 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:57.712 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:57.971 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:57.971 00:04:57.971 real 0m10.148s 00:04:57.971 user 0m2.581s 00:04:57.971 sys 0m4.671s 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.971 10:07:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:57.971 ************************************ 00:04:57.971 END TEST dm_mount 00:04:57.971 ************************************ 00:04:57.971 10:07:11 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:57.971 10:07:11 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:57.971 10:07:11 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.971 10:07:11 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.971 10:07:11 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:57.971 10:07:11 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.971 10:07:11 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.231 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:58.231 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:58.231 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:58.231 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:58.231 10:07:11 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:04:58.231 10:07:11 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:58.231 10:07:11 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:58.231 10:07:11 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.231 10:07:11 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:58.231 10:07:11 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.231 10:07:11 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:58.231 00:04:58.231 real 0m27.579s 00:04:58.231 user 0m7.930s 00:04:58.231 sys 0m14.634s 00:04:58.231 10:07:11 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.231 10:07:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:58.231 ************************************ 00:04:58.231 END TEST devices 00:04:58.231 ************************************ 00:04:58.231 00:04:58.231 real 1m33.094s 00:04:58.231 user 0m29.501s 00:04:58.231 sys 0m52.766s 00:04:58.231 10:07:11 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.231 10:07:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.231 ************************************ 00:04:58.231 END TEST setup.sh 00:04:58.231 ************************************ 00:04:58.490 10:07:11 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:01.783 Hugepages 00:05:01.783 node hugesize free / total 00:05:01.783 node0 1048576kB 0 / 0 00:05:01.783 node0 2048kB 1024 / 1024 00:05:01.783 node1 1048576kB 0 / 0 00:05:01.783 node1 2048kB 1024 / 1024 00:05:01.783 00:05:01.783 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.783 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:01.783 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:01.783 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:01.783 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:01.783 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:01.783 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:01.783 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:01.783 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:01.783 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:01.783 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:01.783 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:01.783 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:01.783 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:01.783 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:01.783 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:01.783 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:02.042 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:02.042 10:07:15 -- spdk/autotest.sh@117 -- # uname -s 00:05:02.042 10:07:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:02.042 10:07:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:02.042 10:07:15 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:05.334 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:00:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:05:05.334 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:05.334 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:05.593 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:05.593 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:05.593 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:06.970 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:07.229 10:07:20 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:08.167 10:07:21 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:08.167 10:07:21 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:08.167 10:07:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:08.167 10:07:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:08.167 10:07:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:08.167 10:07:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:08.167 10:07:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.167 10:07:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:08.167 10:07:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:08.167 10:07:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:08.167 10:07:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:08.167 10:07:21 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:11.460 Waiting for block devices as requested 00:05:11.719 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:11.719 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:11.719 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:11.719 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:11.979 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:11.979 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:12.238 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:12.238 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:12.238 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:12.238 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:12.498 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:12.498 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:12.498 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:12.757 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:12.758 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:12.758 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:13.017 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:13.277 10:07:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:13.277 10:07:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:13.277 10:07:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:13.277 10:07:26 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:13.277 10:07:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:13.277 10:07:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:13.277 10:07:26 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:13.277 10:07:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:13.277 10:07:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:13.277 10:07:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:13.277 10:07:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:13.277 10:07:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:13.277 10:07:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:13.277 10:07:26 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:13.277 10:07:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:13.277 10:07:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:13.277 10:07:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:13.277 10:07:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:13.277 10:07:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:13.277 10:07:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:13.277 10:07:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:13.277 10:07:26 -- common/autotest_common.sh@1543 -- # continue 00:05:13.277 10:07:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:13.277 10:07:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.277 10:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.277 10:07:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:13.277 10:07:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.277 10:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.277 10:07:26 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:16.569 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.569 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.569 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.569 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.569 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.569 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.569 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.569 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:16.828 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.828 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.828 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.828 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.828 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.828 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.828 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.828 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:18.207 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:18.467 10:07:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:18.467 10:07:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.467 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.467 10:07:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:18.467 10:07:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:18.467 10:07:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:18.467 10:07:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:18.467 10:07:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:18.467 10:07:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:18.467 10:07:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:18.467 10:07:31 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:18.467 10:07:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:18.467 10:07:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:18.467 10:07:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.467 10:07:31 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:18.467 10:07:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:18.467 10:07:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:18.467 10:07:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:18.467 10:07:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:18.467 10:07:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:18.467 10:07:32 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:18.467 10:07:32 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:18.467 10:07:32 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:18.467 10:07:32 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:18.467 10:07:32 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:05:18.467 10:07:32 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:05:18.467 10:07:32 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=452602 00:05:18.467 10:07:32 -- common/autotest_common.sh@1585 -- # waitforlisten 452602 00:05:18.467 10:07:32 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.467 10:07:32 -- common/autotest_common.sh@835 -- # '[' -z 452602 ']' 00:05:18.467 10:07:32 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.467 10:07:32 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.467 10:07:32 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.467 10:07:32 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.467 10:07:32 -- common/autotest_common.sh@10 -- # set +x 00:05:18.726 [2024-12-12 10:07:32.125927] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
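The trace above enumerates NVMe controllers by asking gen_nvme.sh for a JSON config and pulling each PCI address out of it with jq. A minimal sketch of that pattern, assuming only what the trace shows (the script path and jq filter); the standalone wrapper below is illustrative, not the harness's exact function body:

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    get_nvme_bdfs() {
        # gen_nvme.sh emits a JSON config whose entries carry params.traddr (per the jq filter above)
        local bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # the trace checks the count the same way before proceeding
        printf '%s\n' "${bdfs[@]}"          # prints 0000:d8:00.0 on this node
    }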
00:05:18.726 [2024-12-12 10:07:32.126013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452602 ] 00:05:18.726 [2024-12-12 10:07:32.213403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.726 [2024-12-12 10:07:32.256900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.985 10:07:32 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.985 10:07:32 -- common/autotest_common.sh@868 -- # return 0 00:05:18.985 10:07:32 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:18.985 10:07:32 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:18.985 10:07:32 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:22.273 nvme0n1 00:05:22.273 10:07:35 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:22.273 [2024-12-12 10:07:35.670136] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:22.273 request: 00:05:22.273 { 00:05:22.273 "nvme_ctrlr_name": "nvme0", 00:05:22.273 "password": "test", 00:05:22.273 "method": "bdev_nvme_opal_revert", 00:05:22.273 "req_id": 1 00:05:22.273 } 00:05:22.273 Got JSON-RPC error response 00:05:22.273 response: 00:05:22.273 { 00:05:22.273 "code": -32602, 00:05:22.273 "message": "Invalid parameters" 00:05:22.273 } 00:05:22.273 10:07:35 -- common/autotest_common.sh@1591 -- # true 00:05:22.273 10:07:35 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:22.273 10:07:35 -- common/autotest_common.sh@1595 -- # killprocess 452602 00:05:22.273 10:07:35 -- common/autotest_common.sh@954 -- # '[' -z 452602 ']' 00:05:22.273 10:07:35 -- common/autotest_common.sh@958 -- # kill -0 452602 00:05:22.273 10:07:35 -- common/autotest_common.sh@959 -- # uname 00:05:22.273 10:07:35 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.273 10:07:35 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 452602 00:05:22.273 10:07:35 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.273 10:07:35 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.273 10:07:35 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 452602' 00:05:22.273 killing process with pid 452602 00:05:22.273 10:07:35 -- common/autotest_common.sh@973 -- # kill 452602 00:05:22.273 10:07:35 -- common/autotest_common.sh@978 -- # wait 452602 00:05:24.806 10:07:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:24.806 10:07:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:24.806 10:07:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:24.806 10:07:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:24.806 10:07:37 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:24.806 10:07:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.806 10:07:37 -- common/autotest_common.sh@10 -- # set +x 00:05:24.806 10:07:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:24.806 10:07:37 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:24.806 10:07:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.806 10:07:37 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:24.806 10:07:37 -- common/autotest_common.sh@10 -- # set +x 00:05:24.806 ************************************ 00:05:24.806 START TEST env 00:05:24.806 ************************************ 00:05:24.806 10:07:37 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:24.806 * Looking for test storage... 00:05:24.806 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:24.806 10:07:38 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.806 10:07:38 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.806 10:07:38 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.806 10:07:38 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.806 10:07:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.806 10:07:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.806 10:07:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.806 10:07:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.806 10:07:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.806 10:07:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.806 10:07:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.806 10:07:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.806 10:07:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.806 10:07:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.806 10:07:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.806 10:07:38 env -- scripts/common.sh@344 -- # case "$op" in 00:05:24.806 10:07:38 env -- scripts/common.sh@345 -- # : 1 00:05:24.806 10:07:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.806 10:07:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.806 10:07:38 env -- scripts/common.sh@365 -- # decimal 1 00:05:24.806 10:07:38 env -- scripts/common.sh@353 -- # local d=1 00:05:24.806 10:07:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.806 10:07:38 env -- scripts/common.sh@355 -- # echo 1 00:05:24.806 10:07:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.806 10:07:38 env -- scripts/common.sh@366 -- # decimal 2 00:05:24.806 10:07:38 env -- scripts/common.sh@353 -- # local d=2 00:05:24.806 10:07:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.806 10:07:38 env -- scripts/common.sh@355 -- # echo 2 00:05:24.806 10:07:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.806 10:07:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.806 10:07:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.806 10:07:38 env -- scripts/common.sh@368 -- # return 0 00:05:24.806 10:07:38 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.806 10:07:38 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.806 --rc genhtml_branch_coverage=1 00:05:24.806 --rc genhtml_function_coverage=1 00:05:24.806 --rc genhtml_legend=1 00:05:24.806 --rc geninfo_all_blocks=1 00:05:24.806 --rc geninfo_unexecuted_blocks=1 00:05:24.806 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.806 ' 00:05:24.806 10:07:38 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.806 --rc genhtml_branch_coverage=1 00:05:24.806 --rc genhtml_function_coverage=1 00:05:24.806 --rc genhtml_legend=1 00:05:24.806 --rc geninfo_all_blocks=1 00:05:24.806 --rc geninfo_unexecuted_blocks=1 00:05:24.807 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.807 ' 00:05:24.807 10:07:38 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.807 --rc genhtml_branch_coverage=1 00:05:24.807 --rc genhtml_function_coverage=1 00:05:24.807 --rc genhtml_legend=1 00:05:24.807 --rc geninfo_all_blocks=1 00:05:24.807 --rc geninfo_unexecuted_blocks=1 00:05:24.807 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.807 ' 00:05:24.807 10:07:38 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.807 --rc genhtml_branch_coverage=1 00:05:24.807 --rc genhtml_function_coverage=1 00:05:24.807 --rc genhtml_legend=1 00:05:24.807 --rc geninfo_all_blocks=1 00:05:24.807 --rc geninfo_unexecuted_blocks=1 00:05:24.807 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:24.807 ' 00:05:24.807 10:07:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:24.807 10:07:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.807 10:07:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.807 10:07:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.807 ************************************ 00:05:24.807 START TEST env_memory 00:05:24.807 ************************************ 00:05:24.807 10:07:38 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:24.807 00:05:24.807 00:05:24.807 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.807 http://cunit.sourceforge.net/ 00:05:24.807 00:05:24.807 00:05:24.807 Suite: memory 00:05:24.807 Test: alloc and free memory map ...[2024-12-12 10:07:38.257678] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:24.807 passed 00:05:24.807 Test: mem map translation ...[2024-12-12 10:07:38.271453] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:24.807 [2024-12-12 10:07:38.271470] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:24.807 [2024-12-12 10:07:38.271501] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:24.807 [2024-12-12 10:07:38.271510] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:24.807 passed 00:05:24.807 Test: mem map registration ...[2024-12-12 10:07:38.292029] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:24.807 [2024-12-12 10:07:38.292045] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:24.807 passed 00:05:24.807 Test: mem map adjacent registrations ...passed 00:05:24.807 00:05:24.807 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.807 suites 1 1 n/a 0 0 00:05:24.807 tests 4 4 4 0 0 00:05:24.807 asserts 152 152 152 0 n/a 00:05:24.807 00:05:24.807 Elapsed time = 0.085 seconds 00:05:24.807 00:05:24.807 real 0m0.098s 00:05:24.807 user 0m0.090s 00:05:24.807 sys 0m0.008s 00:05:24.807 10:07:38 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.807 10:07:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:24.807 ************************************ 00:05:24.807 END TEST env_memory 00:05:24.807 ************************************ 00:05:24.807 10:07:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:24.807 10:07:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.807 10:07:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.807 10:07:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.807 ************************************ 00:05:24.807 START TEST env_vtophys 00:05:24.807 ************************************ 00:05:24.807 10:07:38 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:24.807 EAL: lib.eal log level changed from notice to debug 00:05:24.807 EAL: Detected lcore 0 as core 0 on socket 0 00:05:24.807 EAL: Detected lcore 1 as core 1 on socket 0 00:05:24.807 EAL: Detected lcore 2 as core 2 on socket 0 00:05:24.807 EAL: Detected lcore 3 as 
core 3 on socket 0 00:05:24.807 EAL: Detected lcore 4 as core 4 on socket 0 00:05:24.807 EAL: Detected lcore 5 as core 5 on socket 0 00:05:24.807 EAL: Detected lcore 6 as core 6 on socket 0 00:05:24.807 EAL: Detected lcore 7 as core 8 on socket 0 00:05:24.807 EAL: Detected lcore 8 as core 9 on socket 0 00:05:24.807 EAL: Detected lcore 9 as core 10 on socket 0 00:05:24.807 EAL: Detected lcore 10 as core 11 on socket 0 00:05:24.807 EAL: Detected lcore 11 as core 12 on socket 0 00:05:24.807 EAL: Detected lcore 12 as core 13 on socket 0 00:05:24.807 EAL: Detected lcore 13 as core 14 on socket 0 00:05:24.807 EAL: Detected lcore 14 as core 16 on socket 0 00:05:24.807 EAL: Detected lcore 15 as core 17 on socket 0 00:05:24.807 EAL: Detected lcore 16 as core 18 on socket 0 00:05:24.807 EAL: Detected lcore 17 as core 19 on socket 0 00:05:24.807 EAL: Detected lcore 18 as core 20 on socket 0 00:05:24.807 EAL: Detected lcore 19 as core 21 on socket 0 00:05:24.807 EAL: Detected lcore 20 as core 22 on socket 0 00:05:24.807 EAL: Detected lcore 21 as core 24 on socket 0 00:05:24.807 EAL: Detected lcore 22 as core 25 on socket 0 00:05:24.807 EAL: Detected lcore 23 as core 26 on socket 0 00:05:24.807 EAL: Detected lcore 24 as core 27 on socket 0 00:05:24.807 EAL: Detected lcore 25 as core 28 on socket 0 00:05:24.807 EAL: Detected lcore 26 as core 29 on socket 0 00:05:24.807 EAL: Detected lcore 27 as core 30 on socket 0 00:05:24.807 EAL: Detected lcore 28 as core 0 on socket 1 00:05:24.807 EAL: Detected lcore 29 as core 1 on socket 1 00:05:24.807 EAL: Detected lcore 30 as core 2 on socket 1 00:05:24.807 EAL: Detected lcore 31 as core 3 on socket 1 00:05:24.807 EAL: Detected lcore 32 as core 4 on socket 1 00:05:24.807 EAL: Detected lcore 33 as core 5 on socket 1 00:05:24.807 EAL: Detected lcore 34 as core 6 on socket 1 00:05:24.807 EAL: Detected lcore 35 as core 8 on socket 1 00:05:24.807 EAL: Detected lcore 36 as core 9 on socket 1 00:05:24.807 EAL: Detected lcore 37 as core 10 on socket 1 00:05:24.807 EAL: Detected lcore 38 as core 11 on socket 1 00:05:24.807 EAL: Detected lcore 39 as core 12 on socket 1 00:05:24.807 EAL: Detected lcore 40 as core 13 on socket 1 00:05:24.807 EAL: Detected lcore 41 as core 14 on socket 1 00:05:24.807 EAL: Detected lcore 42 as core 16 on socket 1 00:05:24.807 EAL: Detected lcore 43 as core 17 on socket 1 00:05:24.807 EAL: Detected lcore 44 as core 18 on socket 1 00:05:24.807 EAL: Detected lcore 45 as core 19 on socket 1 00:05:24.807 EAL: Detected lcore 46 as core 20 on socket 1 00:05:24.807 EAL: Detected lcore 47 as core 21 on socket 1 00:05:24.807 EAL: Detected lcore 48 as core 22 on socket 1 00:05:24.807 EAL: Detected lcore 49 as core 24 on socket 1 00:05:24.807 EAL: Detected lcore 50 as core 25 on socket 1 00:05:24.807 EAL: Detected lcore 51 as core 26 on socket 1 00:05:24.807 EAL: Detected lcore 52 as core 27 on socket 1 00:05:24.807 EAL: Detected lcore 53 as core 28 on socket 1 00:05:24.807 EAL: Detected lcore 54 as core 29 on socket 1 00:05:24.807 EAL: Detected lcore 55 as core 30 on socket 1 00:05:24.807 EAL: Detected lcore 56 as core 0 on socket 0 00:05:24.807 EAL: Detected lcore 57 as core 1 on socket 0 00:05:24.807 EAL: Detected lcore 58 as core 2 on socket 0 00:05:24.807 EAL: Detected lcore 59 as core 3 on socket 0 00:05:24.807 EAL: Detected lcore 60 as core 4 on socket 0 00:05:24.807 EAL: Detected lcore 61 as core 5 on socket 0 00:05:24.807 EAL: Detected lcore 62 as core 6 on socket 0 00:05:24.807 EAL: Detected lcore 63 as core 8 on socket 0 00:05:24.807 EAL: 
Detected lcore 64 as core 9 on socket 0 00:05:24.807 EAL: Detected lcore 65 as core 10 on socket 0 00:05:24.807 EAL: Detected lcore 66 as core 11 on socket 0 00:05:24.807 EAL: Detected lcore 67 as core 12 on socket 0 00:05:24.807 EAL: Detected lcore 68 as core 13 on socket 0 00:05:24.807 EAL: Detected lcore 69 as core 14 on socket 0 00:05:24.807 EAL: Detected lcore 70 as core 16 on socket 0 00:05:24.807 EAL: Detected lcore 71 as core 17 on socket 0 00:05:24.807 EAL: Detected lcore 72 as core 18 on socket 0 00:05:24.807 EAL: Detected lcore 73 as core 19 on socket 0 00:05:24.807 EAL: Detected lcore 74 as core 20 on socket 0 00:05:24.807 EAL: Detected lcore 75 as core 21 on socket 0 00:05:24.807 EAL: Detected lcore 76 as core 22 on socket 0 00:05:24.807 EAL: Detected lcore 77 as core 24 on socket 0 00:05:24.807 EAL: Detected lcore 78 as core 25 on socket 0 00:05:24.807 EAL: Detected lcore 79 as core 26 on socket 0 00:05:24.807 EAL: Detected lcore 80 as core 27 on socket 0 00:05:24.807 EAL: Detected lcore 81 as core 28 on socket 0 00:05:24.807 EAL: Detected lcore 82 as core 29 on socket 0 00:05:24.807 EAL: Detected lcore 83 as core 30 on socket 0 00:05:24.807 EAL: Detected lcore 84 as core 0 on socket 1 00:05:24.807 EAL: Detected lcore 85 as core 1 on socket 1 00:05:24.807 EAL: Detected lcore 86 as core 2 on socket 1 00:05:24.807 EAL: Detected lcore 87 as core 3 on socket 1 00:05:24.807 EAL: Detected lcore 88 as core 4 on socket 1 00:05:24.807 EAL: Detected lcore 89 as core 5 on socket 1 00:05:24.807 EAL: Detected lcore 90 as core 6 on socket 1 00:05:24.807 EAL: Detected lcore 91 as core 8 on socket 1 00:05:24.807 EAL: Detected lcore 92 as core 9 on socket 1 00:05:24.807 EAL: Detected lcore 93 as core 10 on socket 1 00:05:24.807 EAL: Detected lcore 94 as core 11 on socket 1 00:05:24.807 EAL: Detected lcore 95 as core 12 on socket 1 00:05:24.807 EAL: Detected lcore 96 as core 13 on socket 1 00:05:24.807 EAL: Detected lcore 97 as core 14 on socket 1 00:05:24.807 EAL: Detected lcore 98 as core 16 on socket 1 00:05:24.807 EAL: Detected lcore 99 as core 17 on socket 1 00:05:24.807 EAL: Detected lcore 100 as core 18 on socket 1 00:05:24.807 EAL: Detected lcore 101 as core 19 on socket 1 00:05:24.807 EAL: Detected lcore 102 as core 20 on socket 1 00:05:24.807 EAL: Detected lcore 103 as core 21 on socket 1 00:05:24.807 EAL: Detected lcore 104 as core 22 on socket 1 00:05:24.807 EAL: Detected lcore 105 as core 24 on socket 1 00:05:24.807 EAL: Detected lcore 106 as core 25 on socket 1 00:05:24.807 EAL: Detected lcore 107 as core 26 on socket 1 00:05:24.807 EAL: Detected lcore 108 as core 27 on socket 1 00:05:24.807 EAL: Detected lcore 109 as core 28 on socket 1 00:05:24.807 EAL: Detected lcore 110 as core 29 on socket 1 00:05:24.807 EAL: Detected lcore 111 as core 30 on socket 1 00:05:24.807 EAL: Maximum logical cores by configuration: 128 00:05:24.807 EAL: Detected CPU lcores: 112 00:05:24.808 EAL: Detected NUMA nodes: 2 00:05:24.808 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:24.808 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:24.808 EAL: Checking presence of .so 'librte_eal.so' 00:05:24.808 EAL: Detected static linkage of DPDK 00:05:24.808 EAL: No shared files mode enabled, IPC will be disabled 00:05:25.066 EAL: Bus pci wants IOVA as 'DC' 00:05:25.066 EAL: Buses did not request a specific IOVA mode. 00:05:25.066 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:25.066 EAL: Selected IOVA mode 'VA' 00:05:25.066 EAL: Probing VFIO support... 
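The lcore map above is DPDK's EAL enumerating this two-socket, 112-lcore host before the vtophys test proper runs. A minimal reproduction sketch, assuming hugepages are already reserved as in the Hugepages status earlier; the binary path is the one the trace launches, and running it outside run_test is an assumption:

    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$spdk/test/env/vtophys/vtophys"    # prints the same EAL detection lines, then the CUnit summary
    # 2MB hugepage reservation per NUMA node can be inspected through standard sysfs:
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages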
00:05:25.066 EAL: IOMMU type 1 (Type 1) is supported 00:05:25.066 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:25.066 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:25.066 EAL: VFIO support initialized 00:05:25.066 EAL: Ask a virtual area of 0x2e000 bytes 00:05:25.066 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:25.066 EAL: Setting up physically contiguous memory... 00:05:25.066 EAL: Setting maximum number of open files to 524288 00:05:25.066 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:25.066 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:25.066 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:25.066 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.066 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:25.066 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.066 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.066 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:25.066 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:25.066 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.066 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:25.066 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.066 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.066 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:25.066 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:25.066 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.066 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:25.066 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.066 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.067 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:25.067 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:25.067 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.067 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:25.067 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.067 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.067 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:25.067 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:25.067 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:25.067 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.067 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:25.067 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.067 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.067 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:25.067 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:25.067 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.067 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:25.067 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.067 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.067 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:25.067 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:25.067 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.067 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:25.067 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.067 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.067 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:25.067 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:25.067 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.067 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:25.067 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.067 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.067 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:25.067 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:25.067 EAL: Hugepages will be freed exactly as allocated. 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: TSC frequency is ~2500000 KHz 00:05:25.067 EAL: Main lcore 0 is ready (tid=7fb27c6e0a00;cpuset=[0]) 00:05:25.067 EAL: Trying to obtain current memory policy. 00:05:25.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.067 EAL: Restoring previous memory policy: 0 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was expanded by 2MB 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Mem event callback 'spdk:(nil)' registered 00:05:25.067 00:05:25.067 00:05:25.067 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.067 http://cunit.sourceforge.net/ 00:05:25.067 00:05:25.067 00:05:25.067 Suite: components_suite 00:05:25.067 Test: vtophys_malloc_test ...passed 00:05:25.067 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:25.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.067 EAL: Restoring previous memory policy: 4 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was expanded by 4MB 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was shrunk by 4MB 00:05:25.067 EAL: Trying to obtain current memory policy. 00:05:25.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.067 EAL: Restoring previous memory policy: 4 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was expanded by 6MB 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was shrunk by 6MB 00:05:25.067 EAL: Trying to obtain current memory policy. 00:05:25.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.067 EAL: Restoring previous memory policy: 4 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was expanded by 10MB 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was shrunk by 10MB 00:05:25.067 EAL: Trying to obtain current memory policy. 
00:05:25.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.067 EAL: Restoring previous memory policy: 4 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was expanded by 18MB 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was shrunk by 18MB 00:05:25.067 EAL: Trying to obtain current memory policy. 00:05:25.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.067 EAL: Restoring previous memory policy: 4 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was expanded by 34MB 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was shrunk by 34MB 00:05:25.067 EAL: Trying to obtain current memory policy. 00:05:25.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.067 EAL: Restoring previous memory policy: 4 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was expanded by 66MB 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.067 EAL: Trying to obtain current memory policy. 00:05:25.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.067 EAL: Restoring previous memory policy: 4 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.067 EAL: Trying to obtain current memory policy. 00:05:25.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.067 EAL: Restoring previous memory policy: 4 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.067 EAL: request: mp_malloc_sync 00:05:25.067 EAL: No shared files mode enabled, IPC is disabled 00:05:25.067 EAL: Heap on socket 0 was expanded by 258MB 00:05:25.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.326 EAL: request: mp_malloc_sync 00:05:25.326 EAL: No shared files mode enabled, IPC is disabled 00:05:25.326 EAL: Heap on socket 0 was shrunk by 258MB 00:05:25.326 EAL: Trying to obtain current memory policy. 
00:05:25.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.326 EAL: Restoring previous memory policy: 4 00:05:25.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.326 EAL: request: mp_malloc_sync 00:05:25.326 EAL: No shared files mode enabled, IPC is disabled 00:05:25.326 EAL: Heap on socket 0 was expanded by 514MB 00:05:25.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.584 EAL: request: mp_malloc_sync 00:05:25.584 EAL: No shared files mode enabled, IPC is disabled 00:05:25.584 EAL: Heap on socket 0 was shrunk by 514MB 00:05:25.584 EAL: Trying to obtain current memory policy. 00:05:25.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.584 EAL: Restoring previous memory policy: 4 00:05:25.584 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.584 EAL: request: mp_malloc_sync 00:05:25.584 EAL: No shared files mode enabled, IPC is disabled 00:05:25.584 EAL: Heap on socket 0 was expanded by 1026MB 00:05:25.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.101 EAL: request: mp_malloc_sync 00:05:26.101 EAL: No shared files mode enabled, IPC is disabled 00:05:26.101 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:26.101 passed 00:05:26.101 00:05:26.101 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.101 suites 1 1 n/a 0 0 00:05:26.101 tests 2 2 2 0 0 00:05:26.101 asserts 497 497 497 0 n/a 00:05:26.101 00:05:26.101 Elapsed time = 0.974 seconds 00:05:26.101 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.101 EAL: request: mp_malloc_sync 00:05:26.101 EAL: No shared files mode enabled, IPC is disabled 00:05:26.101 EAL: Heap on socket 0 was shrunk by 2MB 00:05:26.101 EAL: No shared files mode enabled, IPC is disabled 00:05:26.101 EAL: No shared files mode enabled, IPC is disabled 00:05:26.101 EAL: No shared files mode enabled, IPC is disabled 00:05:26.101 00:05:26.101 real 0m1.113s 00:05:26.101 user 0m0.639s 00:05:26.101 sys 0m0.447s 00:05:26.101 10:07:39 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.101 10:07:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:26.101 ************************************ 00:05:26.101 END TEST env_vtophys 00:05:26.101 ************************************ 00:05:26.101 10:07:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:26.101 10:07:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.101 10:07:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.101 10:07:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.101 ************************************ 00:05:26.101 START TEST env_pci 00:05:26.101 ************************************ 00:05:26.101 10:07:39 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:26.101 00:05:26.101 00:05:26.101 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.101 http://cunit.sourceforge.net/ 00:05:26.101 00:05:26.101 00:05:26.101 Suite: pci 00:05:26.101 Test: pci_hook ...[2024-12-12 10:07:39.611644] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 453908 has claimed it 00:05:26.101 EAL: Cannot find device (10000:00:01.0) 00:05:26.101 EAL: Failed to attach device on primary process 00:05:26.101 passed 00:05:26.101 00:05:26.101 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:26.101 suites 1 1 n/a 0 0 00:05:26.101 tests 1 1 1 0 0 00:05:26.101 asserts 25 25 25 0 n/a 00:05:26.101 00:05:26.101 Elapsed time = 0.032 seconds 00:05:26.101 00:05:26.101 real 0m0.051s 00:05:26.101 user 0m0.012s 00:05:26.101 sys 0m0.039s 00:05:26.101 10:07:39 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.101 10:07:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:26.101 ************************************ 00:05:26.101 END TEST env_pci 00:05:26.101 ************************************ 00:05:26.101 10:07:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:26.101 10:07:39 env -- env/env.sh@15 -- # uname 00:05:26.101 10:07:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:26.101 10:07:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:26.101 10:07:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:26.101 10:07:39 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:26.101 10:07:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.101 10:07:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.101 ************************************ 00:05:26.101 START TEST env_dpdk_post_init 00:05:26.101 ************************************ 00:05:26.101 10:07:39 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:26.360 EAL: Detected CPU lcores: 112 00:05:26.360 EAL: Detected NUMA nodes: 2 00:05:26.360 EAL: Detected static linkage of DPDK 00:05:26.360 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:26.360 EAL: Selected IOVA mode 'VA' 00:05:26.360 EAL: VFIO support initialized 00:05:26.360 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:26.360 EAL: Using IOMMU type 1 (Type 1) 00:05:27.298 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:30.587 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:30.587 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:30.846 Starting DPDK initialization... 00:05:30.846 Starting SPDK post initialization... 00:05:30.846 SPDK NVMe probe 00:05:30.846 Attaching to 0000:d8:00.0 00:05:30.846 Attached to 0000:d8:00.0 00:05:30.846 Cleaning up... 
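The post-init probe above attaches to 0000:d8:00.0 from user space, which requires the device to be bound to vfio-pci first; the same setup.sh script seen earlier in this trace does the binding, and an equivalent attach can be issued over RPC against a running target. A sketch using only paths and arguments that appear in the trace (the ordering and a root shell are assumptions):

    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$spdk/scripts/setup.sh" status    # list hugepages and which driver owns each BDF
    "$spdk/scripts/setup.sh"           # bind NVMe/IOAT devices to vfio-pci
    # with an spdk_tgt already listening (as in the opal revert section above):
    "$spdk/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    "$spdk/scripts/setup.sh" reset     # hand the devices back to the kernel nvme/ioatdma drivers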
00:05:30.846 00:05:30.846 real 0m4.680s 00:05:30.846 user 0m3.266s 00:05:30.846 sys 0m0.657s 00:05:30.846 10:07:44 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.846 10:07:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.846 ************************************ 00:05:30.846 END TEST env_dpdk_post_init 00:05:30.846 ************************************ 00:05:30.846 10:07:44 env -- env/env.sh@26 -- # uname 00:05:30.846 10:07:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:30.846 10:07:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:30.846 10:07:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.846 10:07:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.846 10:07:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.105 ************************************ 00:05:31.105 START TEST env_mem_callbacks 00:05:31.105 ************************************ 00:05:31.105 10:07:44 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.105 EAL: Detected CPU lcores: 112 00:05:31.105 EAL: Detected NUMA nodes: 2 00:05:31.105 EAL: Detected static linkage of DPDK 00:05:31.105 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.105 EAL: Selected IOVA mode 'VA' 00:05:31.105 EAL: VFIO support initialized 00:05:31.105 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.105 00:05:31.105 00:05:31.105 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.105 http://cunit.sourceforge.net/ 00:05:31.105 00:05:31.105 00:05:31.105 Suite: memory 00:05:31.105 Test: test ... 
00:05:31.105 register 0x200000200000 2097152 00:05:31.105 malloc 3145728 00:05:31.105 register 0x200000400000 4194304 00:05:31.105 buf 0x200000500000 len 3145728 PASSED 00:05:31.105 malloc 64 00:05:31.105 buf 0x2000004fff40 len 64 PASSED 00:05:31.105 malloc 4194304 00:05:31.105 register 0x200000800000 6291456 00:05:31.105 buf 0x200000a00000 len 4194304 PASSED 00:05:31.105 free 0x200000500000 3145728 00:05:31.105 free 0x2000004fff40 64 00:05:31.105 unregister 0x200000400000 4194304 PASSED 00:05:31.105 free 0x200000a00000 4194304 00:05:31.105 unregister 0x200000800000 6291456 PASSED 00:05:31.105 malloc 8388608 00:05:31.105 register 0x200000400000 10485760 00:05:31.105 buf 0x200000600000 len 8388608 PASSED 00:05:31.105 free 0x200000600000 8388608 00:05:31.105 unregister 0x200000400000 10485760 PASSED 00:05:31.105 passed 00:05:31.105 00:05:31.105 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.105 suites 1 1 n/a 0 0 00:05:31.105 tests 1 1 1 0 0 00:05:31.105 asserts 15 15 15 0 n/a 00:05:31.105 00:05:31.105 Elapsed time = 0.008 seconds 00:05:31.105 00:05:31.105 real 0m0.068s 00:05:31.105 user 0m0.023s 00:05:31.105 sys 0m0.045s 00:05:31.105 10:07:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.105 10:07:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:31.105 ************************************ 00:05:31.105 END TEST env_mem_callbacks 00:05:31.105 ************************************ 00:05:31.105 00:05:31.105 real 0m6.620s 00:05:31.105 user 0m4.296s 00:05:31.105 sys 0m1.587s 00:05:31.105 10:07:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.105 10:07:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.105 ************************************ 00:05:31.105 END TEST env 00:05:31.105 ************************************ 00:05:31.105 10:07:44 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:31.105 10:07:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.105 10:07:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.105 10:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:31.105 ************************************ 00:05:31.105 START TEST rpc 00:05:31.105 ************************************ 00:05:31.105 10:07:44 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:31.371 * Looking for test storage... 
00:05:31.371 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:31.371 10:07:44 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.371 10:07:44 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.371 10:07:44 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.371 10:07:44 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.371 10:07:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.371 10:07:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.371 10:07:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.371 10:07:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.371 10:07:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.371 10:07:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.371 10:07:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.371 10:07:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.371 10:07:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.371 10:07:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.371 10:07:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.372 10:07:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:31.372 10:07:44 rpc -- scripts/common.sh@345 -- # : 1 00:05:31.372 10:07:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.372 10:07:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.372 10:07:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:31.372 10:07:44 rpc -- scripts/common.sh@353 -- # local d=1 00:05:31.372 10:07:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.372 10:07:44 rpc -- scripts/common.sh@355 -- # echo 1 00:05:31.372 10:07:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.372 10:07:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:31.372 10:07:44 rpc -- scripts/common.sh@353 -- # local d=2 00:05:31.372 10:07:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.372 10:07:44 rpc -- scripts/common.sh@355 -- # echo 2 00:05:31.372 10:07:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.372 10:07:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.372 10:07:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.372 10:07:44 rpc -- scripts/common.sh@368 -- # return 0 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.372 --rc genhtml_branch_coverage=1 00:05:31.372 --rc genhtml_function_coverage=1 00:05:31.372 --rc genhtml_legend=1 00:05:31.372 --rc geninfo_all_blocks=1 00:05:31.372 --rc geninfo_unexecuted_blocks=1 00:05:31.372 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:31.372 ' 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.372 --rc genhtml_branch_coverage=1 00:05:31.372 --rc genhtml_function_coverage=1 00:05:31.372 --rc genhtml_legend=1 00:05:31.372 --rc geninfo_all_blocks=1 00:05:31.372 --rc geninfo_unexecuted_blocks=1 00:05:31.372 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:31.372 ' 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:05:31.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.372 --rc genhtml_branch_coverage=1 00:05:31.372 --rc genhtml_function_coverage=1 00:05:31.372 --rc genhtml_legend=1 00:05:31.372 --rc geninfo_all_blocks=1 00:05:31.372 --rc geninfo_unexecuted_blocks=1 00:05:31.372 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:31.372 ' 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.372 --rc genhtml_branch_coverage=1 00:05:31.372 --rc genhtml_function_coverage=1 00:05:31.372 --rc genhtml_legend=1 00:05:31.372 --rc geninfo_all_blocks=1 00:05:31.372 --rc geninfo_unexecuted_blocks=1 00:05:31.372 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:31.372 ' 00:05:31.372 10:07:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=455063 00:05:31.372 10:07:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:31.372 10:07:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.372 10:07:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 455063 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 455063 ']' 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.372 10:07:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.372 [2024-12-12 10:07:44.921432] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:05:31.372 [2024-12-12 10:07:44.921493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455063 ] 00:05:31.372 [2024-12-12 10:07:45.005306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.633 [2024-12-12 10:07:45.047008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:31.633 [2024-12-12 10:07:45.047041] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 455063' to capture a snapshot of events at runtime. 00:05:31.633 [2024-12-12 10:07:45.047050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:31.633 [2024-12-12 10:07:45.047059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:31.633 [2024-12-12 10:07:45.047066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid455063 for offline analysis/debug. 
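The rpc_integrity test that follows exercises the freshly started target purely over JSON-RPC; the same exchange can be reproduced with rpc.py. The target binary, script path, method names and arguments below are taken from the trace, while the explicit jq pipeline stands in for the test's rpc_cmd wrapper:

    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$spdk/build/bin/spdk_tgt" -e bdev &               # the trace starts this as pid 455063
    "$spdk/scripts/rpc.py" bdev_get_bdevs | jq length  # 0 right after startup, as below
    "$spdk/scripts/rpc.py" bdev_malloc_create 8 512    # 8MB malloc bdev, 512B blocks; prints its name
    "$spdk/scripts/rpc.py" bdev_get_bdevs | jq length  # now 1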
00:05:31.633 [2024-12-12 10:07:45.047614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.633 10:07:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.633 10:07:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.633 10:07:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:31.633 10:07:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:31.633 10:07:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:31.633 10:07:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:31.633 10:07:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.633 10:07:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.633 10:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.891 ************************************ 00:05:31.891 START TEST rpc_integrity 00:05:31.891 ************************************ 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:31.891 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.891 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.891 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.891 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.891 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.891 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:31.891 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.891 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.891 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.891 { 00:05:31.891 "name": "Malloc0", 00:05:31.891 "aliases": [ 00:05:31.892 "6f538f6e-a4f9-440d-bbf7-9998307b38d5" 00:05:31.892 ], 00:05:31.892 "product_name": "Malloc disk", 00:05:31.892 "block_size": 512, 00:05:31.892 "num_blocks": 16384, 00:05:31.892 "uuid": "6f538f6e-a4f9-440d-bbf7-9998307b38d5", 00:05:31.892 "assigned_rate_limits": { 00:05:31.892 "rw_ios_per_sec": 0, 00:05:31.892 "rw_mbytes_per_sec": 0, 00:05:31.892 "r_mbytes_per_sec": 0, 00:05:31.892 "w_mbytes_per_sec": 
0 00:05:31.892 }, 00:05:31.892 "claimed": false, 00:05:31.892 "zoned": false, 00:05:31.892 "supported_io_types": { 00:05:31.892 "read": true, 00:05:31.892 "write": true, 00:05:31.892 "unmap": true, 00:05:31.892 "flush": true, 00:05:31.892 "reset": true, 00:05:31.892 "nvme_admin": false, 00:05:31.892 "nvme_io": false, 00:05:31.892 "nvme_io_md": false, 00:05:31.892 "write_zeroes": true, 00:05:31.892 "zcopy": true, 00:05:31.892 "get_zone_info": false, 00:05:31.892 "zone_management": false, 00:05:31.892 "zone_append": false, 00:05:31.892 "compare": false, 00:05:31.892 "compare_and_write": false, 00:05:31.892 "abort": true, 00:05:31.892 "seek_hole": false, 00:05:31.892 "seek_data": false, 00:05:31.892 "copy": true, 00:05:31.892 "nvme_iov_md": false 00:05:31.892 }, 00:05:31.892 "memory_domains": [ 00:05:31.892 { 00:05:31.892 "dma_device_id": "system", 00:05:31.892 "dma_device_type": 1 00:05:31.892 }, 00:05:31.892 { 00:05:31.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.892 "dma_device_type": 2 00:05:31.892 } 00:05:31.892 ], 00:05:31.892 "driver_specific": {} 00:05:31.892 } 00:05:31.892 ]' 00:05:31.892 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.892 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.892 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:31.892 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.892 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.892 [2024-12-12 10:07:45.445871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:31.892 [2024-12-12 10:07:45.445902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.892 [2024-12-12 10:07:45.445923] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x64f1bc0 00:05:31.892 [2024-12-12 10:07:45.445933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.892 [2024-12-12 10:07:45.446843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.892 [2024-12-12 10:07:45.446864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.892 Passthru0 00:05:31.892 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.892 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.892 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.892 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.892 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.892 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.892 { 00:05:31.892 "name": "Malloc0", 00:05:31.892 "aliases": [ 00:05:31.892 "6f538f6e-a4f9-440d-bbf7-9998307b38d5" 00:05:31.892 ], 00:05:31.892 "product_name": "Malloc disk", 00:05:31.892 "block_size": 512, 00:05:31.892 "num_blocks": 16384, 00:05:31.892 "uuid": "6f538f6e-a4f9-440d-bbf7-9998307b38d5", 00:05:31.892 "assigned_rate_limits": { 00:05:31.892 "rw_ios_per_sec": 0, 00:05:31.892 "rw_mbytes_per_sec": 0, 00:05:31.892 "r_mbytes_per_sec": 0, 00:05:31.892 "w_mbytes_per_sec": 0 00:05:31.892 }, 00:05:31.892 "claimed": true, 00:05:31.892 "claim_type": "exclusive_write", 00:05:31.892 "zoned": false, 00:05:31.892 "supported_io_types": { 00:05:31.892 "read": true, 00:05:31.892 "write": true, 00:05:31.892 "unmap": true, 
00:05:31.892 "flush": true, 00:05:31.892 "reset": true, 00:05:31.892 "nvme_admin": false, 00:05:31.892 "nvme_io": false, 00:05:31.892 "nvme_io_md": false, 00:05:31.892 "write_zeroes": true, 00:05:31.892 "zcopy": true, 00:05:31.892 "get_zone_info": false, 00:05:31.892 "zone_management": false, 00:05:31.892 "zone_append": false, 00:05:31.892 "compare": false, 00:05:31.892 "compare_and_write": false, 00:05:31.892 "abort": true, 00:05:31.892 "seek_hole": false, 00:05:31.892 "seek_data": false, 00:05:31.892 "copy": true, 00:05:31.892 "nvme_iov_md": false 00:05:31.892 }, 00:05:31.892 "memory_domains": [ 00:05:31.892 { 00:05:31.892 "dma_device_id": "system", 00:05:31.892 "dma_device_type": 1 00:05:31.892 }, 00:05:31.892 { 00:05:31.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.892 "dma_device_type": 2 00:05:31.892 } 00:05:31.892 ], 00:05:31.892 "driver_specific": {} 00:05:31.892 }, 00:05:31.892 { 00:05:31.892 "name": "Passthru0", 00:05:31.892 "aliases": [ 00:05:31.892 "a356becf-1a81-5f15-ae75-1b5762f8c40a" 00:05:31.892 ], 00:05:31.892 "product_name": "passthru", 00:05:31.892 "block_size": 512, 00:05:31.892 "num_blocks": 16384, 00:05:31.892 "uuid": "a356becf-1a81-5f15-ae75-1b5762f8c40a", 00:05:31.892 "assigned_rate_limits": { 00:05:31.892 "rw_ios_per_sec": 0, 00:05:31.892 "rw_mbytes_per_sec": 0, 00:05:31.892 "r_mbytes_per_sec": 0, 00:05:31.892 "w_mbytes_per_sec": 0 00:05:31.892 }, 00:05:31.892 "claimed": false, 00:05:31.892 "zoned": false, 00:05:31.892 "supported_io_types": { 00:05:31.892 "read": true, 00:05:31.892 "write": true, 00:05:31.892 "unmap": true, 00:05:31.892 "flush": true, 00:05:31.892 "reset": true, 00:05:31.892 "nvme_admin": false, 00:05:31.892 "nvme_io": false, 00:05:31.892 "nvme_io_md": false, 00:05:31.892 "write_zeroes": true, 00:05:31.892 "zcopy": true, 00:05:31.892 "get_zone_info": false, 00:05:31.892 "zone_management": false, 00:05:31.892 "zone_append": false, 00:05:31.892 "compare": false, 00:05:31.892 "compare_and_write": false, 00:05:31.892 "abort": true, 00:05:31.892 "seek_hole": false, 00:05:31.892 "seek_data": false, 00:05:31.892 "copy": true, 00:05:31.892 "nvme_iov_md": false 00:05:31.892 }, 00:05:31.892 "memory_domains": [ 00:05:31.892 { 00:05:31.892 "dma_device_id": "system", 00:05:31.892 "dma_device_type": 1 00:05:31.892 }, 00:05:31.892 { 00:05:31.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.892 "dma_device_type": 2 00:05:31.892 } 00:05:31.892 ], 00:05:31.892 "driver_specific": { 00:05:31.892 "passthru": { 00:05:31.892 "name": "Passthru0", 00:05:31.892 "base_bdev_name": "Malloc0" 00:05:31.892 } 00:05:31.892 } 00:05:31.892 } 00:05:31.892 ]' 00:05:31.892 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:32.151 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.151 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.151 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.151 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.151 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.151 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:32.151 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.151 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.151 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.151 10:07:45 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.151 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.151 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.151 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.151 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.152 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:32.152 10:07:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.152 00:05:32.152 real 0m0.296s 00:05:32.152 user 0m0.181s 00:05:32.152 sys 0m0.053s 00:05:32.152 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.152 10:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.152 ************************************ 00:05:32.152 END TEST rpc_integrity 00:05:32.152 ************************************ 00:05:32.152 10:07:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:32.152 10:07:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.152 10:07:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.152 10:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.152 ************************************ 00:05:32.152 START TEST rpc_plugins 00:05:32.152 ************************************ 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:32.152 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.152 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:32.152 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.152 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:32.152 { 00:05:32.152 "name": "Malloc1", 00:05:32.152 "aliases": [ 00:05:32.152 "117ea8aa-c345-4784-8c4d-9d95fabb91d5" 00:05:32.152 ], 00:05:32.152 "product_name": "Malloc disk", 00:05:32.152 "block_size": 4096, 00:05:32.152 "num_blocks": 256, 00:05:32.152 "uuid": "117ea8aa-c345-4784-8c4d-9d95fabb91d5", 00:05:32.152 "assigned_rate_limits": { 00:05:32.152 "rw_ios_per_sec": 0, 00:05:32.152 "rw_mbytes_per_sec": 0, 00:05:32.152 "r_mbytes_per_sec": 0, 00:05:32.152 "w_mbytes_per_sec": 0 00:05:32.152 }, 00:05:32.152 "claimed": false, 00:05:32.152 "zoned": false, 00:05:32.152 "supported_io_types": { 00:05:32.152 "read": true, 00:05:32.152 "write": true, 00:05:32.152 "unmap": true, 00:05:32.152 "flush": true, 00:05:32.152 "reset": true, 00:05:32.152 "nvme_admin": false, 00:05:32.152 "nvme_io": false, 00:05:32.152 "nvme_io_md": false, 00:05:32.152 "write_zeroes": true, 00:05:32.152 "zcopy": true, 00:05:32.152 "get_zone_info": false, 00:05:32.152 "zone_management": false, 00:05:32.152 "zone_append": false, 00:05:32.152 "compare": false, 00:05:32.152 "compare_and_write": false, 00:05:32.152 "abort": true, 00:05:32.152 "seek_hole": false, 00:05:32.152 "seek_data": false, 00:05:32.152 "copy": true, 00:05:32.152 
"nvme_iov_md": false 00:05:32.152 }, 00:05:32.152 "memory_domains": [ 00:05:32.152 { 00:05:32.152 "dma_device_id": "system", 00:05:32.152 "dma_device_type": 1 00:05:32.152 }, 00:05:32.152 { 00:05:32.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.152 "dma_device_type": 2 00:05:32.152 } 00:05:32.152 ], 00:05:32.152 "driver_specific": {} 00:05:32.152 } 00:05:32.152 ]' 00:05:32.152 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:32.152 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:32.152 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.152 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.152 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.152 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:32.411 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:32.411 10:07:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:32.411 00:05:32.411 real 0m0.152s 00:05:32.411 user 0m0.089s 00:05:32.411 sys 0m0.028s 00:05:32.411 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.411 10:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 ************************************ 00:05:32.411 END TEST rpc_plugins 00:05:32.411 ************************************ 00:05:32.411 10:07:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:32.411 10:07:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.411 10:07:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.411 10:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 ************************************ 00:05:32.411 START TEST rpc_trace_cmd_test 00:05:32.411 ************************************ 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:32.411 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid455063", 00:05:32.411 "tpoint_group_mask": "0x8", 00:05:32.411 "iscsi_conn": { 00:05:32.411 "mask": "0x2", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "scsi": { 00:05:32.411 "mask": "0x4", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "bdev": { 00:05:32.411 "mask": "0x8", 00:05:32.411 "tpoint_mask": "0xffffffffffffffff" 00:05:32.411 }, 00:05:32.411 "nvmf_rdma": { 00:05:32.411 "mask": "0x10", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "nvmf_tcp": { 00:05:32.411 "mask": "0x20", 
00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "ftl": { 00:05:32.411 "mask": "0x40", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "blobfs": { 00:05:32.411 "mask": "0x80", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "dsa": { 00:05:32.411 "mask": "0x200", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "thread": { 00:05:32.411 "mask": "0x400", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "nvme_pcie": { 00:05:32.411 "mask": "0x800", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "iaa": { 00:05:32.411 "mask": "0x1000", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "nvme_tcp": { 00:05:32.411 "mask": "0x2000", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "bdev_nvme": { 00:05:32.411 "mask": "0x4000", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "sock": { 00:05:32.411 "mask": "0x8000", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "blob": { 00:05:32.411 "mask": "0x10000", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "bdev_raid": { 00:05:32.411 "mask": "0x20000", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 }, 00:05:32.411 "scheduler": { 00:05:32.411 "mask": "0x40000", 00:05:32.411 "tpoint_mask": "0x0" 00:05:32.411 } 00:05:32.411 }' 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:32.411 10:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:32.411 10:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:32.411 10:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:32.411 10:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:32.670 10:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:32.670 10:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:32.670 10:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:32.670 10:07:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:32.670 00:05:32.670 real 0m0.217s 00:05:32.670 user 0m0.169s 00:05:32.670 sys 0m0.040s 00:05:32.670 10:07:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.670 10:07:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.670 ************************************ 00:05:32.670 END TEST rpc_trace_cmd_test 00:05:32.670 ************************************ 00:05:32.670 10:07:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:32.670 10:07:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:32.670 10:07:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:32.670 10:07:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.670 10:07:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.670 10:07:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.670 ************************************ 00:05:32.670 START TEST rpc_daemon_integrity 00:05:32.670 ************************************ 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.670 10:07:46 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.670 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.929 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.929 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.929 { 00:05:32.929 "name": "Malloc2", 00:05:32.929 "aliases": [ 00:05:32.929 "9a839faa-d033-4c1c-9402-a001334287c8" 00:05:32.929 ], 00:05:32.929 "product_name": "Malloc disk", 00:05:32.929 "block_size": 512, 00:05:32.929 "num_blocks": 16384, 00:05:32.929 "uuid": "9a839faa-d033-4c1c-9402-a001334287c8", 00:05:32.929 "assigned_rate_limits": { 00:05:32.929 "rw_ios_per_sec": 0, 00:05:32.929 "rw_mbytes_per_sec": 0, 00:05:32.929 "r_mbytes_per_sec": 0, 00:05:32.929 "w_mbytes_per_sec": 0 00:05:32.929 }, 00:05:32.929 "claimed": false, 00:05:32.929 "zoned": false, 00:05:32.929 "supported_io_types": { 00:05:32.930 "read": true, 00:05:32.930 "write": true, 00:05:32.930 "unmap": true, 00:05:32.930 "flush": true, 00:05:32.930 "reset": true, 00:05:32.930 "nvme_admin": false, 00:05:32.930 "nvme_io": false, 00:05:32.930 "nvme_io_md": false, 00:05:32.930 "write_zeroes": true, 00:05:32.930 "zcopy": true, 00:05:32.930 "get_zone_info": false, 00:05:32.930 "zone_management": false, 00:05:32.930 "zone_append": false, 00:05:32.930 "compare": false, 00:05:32.930 "compare_and_write": false, 00:05:32.930 "abort": true, 00:05:32.930 "seek_hole": false, 00:05:32.930 "seek_data": false, 00:05:32.930 "copy": true, 00:05:32.930 "nvme_iov_md": false 00:05:32.930 }, 00:05:32.930 "memory_domains": [ 00:05:32.930 { 00:05:32.930 "dma_device_id": "system", 00:05:32.930 "dma_device_type": 1 00:05:32.930 }, 00:05:32.930 { 00:05:32.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.930 "dma_device_type": 2 00:05:32.930 } 00:05:32.930 ], 00:05:32.930 "driver_specific": {} 00:05:32.930 } 00:05:32.930 ]' 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 [2024-12-12 10:07:46.356206] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:32.930 
[2024-12-12 10:07:46.356236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.930 [2024-12-12 10:07:46.356259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x64e8260 00:05:32.930 [2024-12-12 10:07:46.356269] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.930 [2024-12-12 10:07:46.357114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.930 [2024-12-12 10:07:46.357138] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.930 Passthru0 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.930 { 00:05:32.930 "name": "Malloc2", 00:05:32.930 "aliases": [ 00:05:32.930 "9a839faa-d033-4c1c-9402-a001334287c8" 00:05:32.930 ], 00:05:32.930 "product_name": "Malloc disk", 00:05:32.930 "block_size": 512, 00:05:32.930 "num_blocks": 16384, 00:05:32.930 "uuid": "9a839faa-d033-4c1c-9402-a001334287c8", 00:05:32.930 "assigned_rate_limits": { 00:05:32.930 "rw_ios_per_sec": 0, 00:05:32.930 "rw_mbytes_per_sec": 0, 00:05:32.930 "r_mbytes_per_sec": 0, 00:05:32.930 "w_mbytes_per_sec": 0 00:05:32.930 }, 00:05:32.930 "claimed": true, 00:05:32.930 "claim_type": "exclusive_write", 00:05:32.930 "zoned": false, 00:05:32.930 "supported_io_types": { 00:05:32.930 "read": true, 00:05:32.930 "write": true, 00:05:32.930 "unmap": true, 00:05:32.930 "flush": true, 00:05:32.930 "reset": true, 00:05:32.930 "nvme_admin": false, 00:05:32.930 "nvme_io": false, 00:05:32.930 "nvme_io_md": false, 00:05:32.930 "write_zeroes": true, 00:05:32.930 "zcopy": true, 00:05:32.930 "get_zone_info": false, 00:05:32.930 "zone_management": false, 00:05:32.930 "zone_append": false, 00:05:32.930 "compare": false, 00:05:32.930 "compare_and_write": false, 00:05:32.930 "abort": true, 00:05:32.930 "seek_hole": false, 00:05:32.930 "seek_data": false, 00:05:32.930 "copy": true, 00:05:32.930 "nvme_iov_md": false 00:05:32.930 }, 00:05:32.930 "memory_domains": [ 00:05:32.930 { 00:05:32.930 "dma_device_id": "system", 00:05:32.930 "dma_device_type": 1 00:05:32.930 }, 00:05:32.930 { 00:05:32.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.930 "dma_device_type": 2 00:05:32.930 } 00:05:32.930 ], 00:05:32.930 "driver_specific": {} 00:05:32.930 }, 00:05:32.930 { 00:05:32.930 "name": "Passthru0", 00:05:32.930 "aliases": [ 00:05:32.930 "63eda58f-e759-5970-9b87-4864aa543558" 00:05:32.930 ], 00:05:32.930 "product_name": "passthru", 00:05:32.930 "block_size": 512, 00:05:32.930 "num_blocks": 16384, 00:05:32.930 "uuid": "63eda58f-e759-5970-9b87-4864aa543558", 00:05:32.930 "assigned_rate_limits": { 00:05:32.930 "rw_ios_per_sec": 0, 00:05:32.930 "rw_mbytes_per_sec": 0, 00:05:32.930 "r_mbytes_per_sec": 0, 00:05:32.930 "w_mbytes_per_sec": 0 00:05:32.930 }, 00:05:32.930 "claimed": false, 00:05:32.930 "zoned": false, 00:05:32.930 "supported_io_types": { 00:05:32.930 "read": true, 00:05:32.930 "write": true, 00:05:32.930 "unmap": true, 00:05:32.930 "flush": true, 00:05:32.930 "reset": true, 
00:05:32.930 "nvme_admin": false, 00:05:32.930 "nvme_io": false, 00:05:32.930 "nvme_io_md": false, 00:05:32.930 "write_zeroes": true, 00:05:32.930 "zcopy": true, 00:05:32.930 "get_zone_info": false, 00:05:32.930 "zone_management": false, 00:05:32.930 "zone_append": false, 00:05:32.930 "compare": false, 00:05:32.930 "compare_and_write": false, 00:05:32.930 "abort": true, 00:05:32.930 "seek_hole": false, 00:05:32.930 "seek_data": false, 00:05:32.930 "copy": true, 00:05:32.930 "nvme_iov_md": false 00:05:32.930 }, 00:05:32.930 "memory_domains": [ 00:05:32.930 { 00:05:32.930 "dma_device_id": "system", 00:05:32.930 "dma_device_type": 1 00:05:32.930 }, 00:05:32.930 { 00:05:32.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.930 "dma_device_type": 2 00:05:32.930 } 00:05:32.930 ], 00:05:32.930 "driver_specific": { 00:05:32.930 "passthru": { 00:05:32.930 "name": "Passthru0", 00:05:32.930 "base_bdev_name": "Malloc2" 00:05:32.930 } 00:05:32.930 } 00:05:32.930 } 00:05:32.930 ]' 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.930 00:05:32.930 real 0m0.289s 00:05:32.930 user 0m0.175s 00:05:32.930 sys 0m0.053s 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.930 10:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 ************************************ 00:05:32.930 END TEST rpc_daemon_integrity 00:05:32.930 ************************************ 00:05:32.930 10:07:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:32.930 10:07:46 rpc -- rpc/rpc.sh@84 -- # killprocess 455063 00:05:32.930 10:07:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 455063 ']' 00:05:32.930 10:07:46 rpc -- common/autotest_common.sh@958 -- # kill -0 455063 00:05:32.930 10:07:46 rpc -- common/autotest_common.sh@959 -- # uname 00:05:32.930 10:07:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.930 10:07:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455063 
00:05:33.190 10:07:46 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.190 10:07:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.190 10:07:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455063' 00:05:33.190 killing process with pid 455063 00:05:33.190 10:07:46 rpc -- common/autotest_common.sh@973 -- # kill 455063 00:05:33.190 10:07:46 rpc -- common/autotest_common.sh@978 -- # wait 455063 00:05:33.453 00:05:33.453 real 0m2.206s 00:05:33.453 user 0m2.763s 00:05:33.453 sys 0m0.848s 00:05:33.453 10:07:46 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.453 10:07:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.453 ************************************ 00:05:33.453 END TEST rpc 00:05:33.453 ************************************ 00:05:33.453 10:07:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:33.453 10:07:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.453 10:07:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.453 10:07:46 -- common/autotest_common.sh@10 -- # set +x 00:05:33.453 ************************************ 00:05:33.453 START TEST skip_rpc 00:05:33.453 ************************************ 00:05:33.453 10:07:46 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:33.453 * Looking for test storage... 00:05:33.724 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.724 10:07:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.724 --rc genhtml_branch_coverage=1 00:05:33.724 --rc genhtml_function_coverage=1 00:05:33.724 --rc genhtml_legend=1 00:05:33.724 --rc geninfo_all_blocks=1 00:05:33.724 --rc geninfo_unexecuted_blocks=1 00:05:33.724 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.724 ' 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.724 --rc genhtml_branch_coverage=1 00:05:33.724 --rc genhtml_function_coverage=1 00:05:33.724 --rc genhtml_legend=1 00:05:33.724 --rc geninfo_all_blocks=1 00:05:33.724 --rc geninfo_unexecuted_blocks=1 00:05:33.724 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.724 ' 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.724 --rc genhtml_branch_coverage=1 00:05:33.724 --rc genhtml_function_coverage=1 00:05:33.724 --rc genhtml_legend=1 00:05:33.724 --rc geninfo_all_blocks=1 00:05:33.724 --rc geninfo_unexecuted_blocks=1 00:05:33.724 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.724 ' 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.724 --rc genhtml_branch_coverage=1 00:05:33.724 --rc genhtml_function_coverage=1 00:05:33.724 --rc genhtml_legend=1 00:05:33.724 --rc geninfo_all_blocks=1 00:05:33.724 --rc geninfo_unexecuted_blocks=1 00:05:33.724 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:33.724 ' 00:05:33.724 10:07:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:33.724 10:07:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:33.724 10:07:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.724 10:07:47 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.724 10:07:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 ************************************ 00:05:33.724 START TEST skip_rpc 00:05:33.724 ************************************ 00:05:33.724 10:07:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:33.724 10:07:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=455534 00:05:33.724 10:07:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.724 10:07:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:33.724 10:07:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:33.724 [2024-12-12 10:07:47.247006] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:05:33.724 [2024-12-12 10:07:47.247083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455534 ] 00:05:33.724 [2024-12-12 10:07:47.332058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.984 [2024-12-12 10:07:47.373229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 455534 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 455534 ']' 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 455534 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455534 00:05:39.256 
10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455534' 00:05:39.256 killing process with pid 455534 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 455534 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 455534 00:05:39.256 00:05:39.256 real 0m5.378s 00:05:39.256 user 0m5.143s 00:05:39.256 sys 0m0.291s 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.256 10:07:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.256 ************************************ 00:05:39.256 END TEST skip_rpc 00:05:39.256 ************************************ 00:05:39.256 10:07:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:39.256 10:07:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.256 10:07:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.256 10:07:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.256 ************************************ 00:05:39.256 START TEST skip_rpc_with_json 00:05:39.256 ************************************ 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=456611 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 456611 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 456611 ']' 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.256 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.256 [2024-12-12 10:07:52.709592] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:05:39.256 [2024-12-12 10:07:52.709656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456611 ] 00:05:39.256 [2024-12-12 10:07:52.793436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.256 [2024-12-12 10:07:52.833134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.516 [2024-12-12 10:07:53.057084] nvmf_rpc.c:2862:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:39.516 request: 00:05:39.516 { 00:05:39.516 "trtype": "tcp", 00:05:39.516 "method": "nvmf_get_transports", 00:05:39.516 "req_id": 1 00:05:39.516 } 00:05:39.516 Got JSON-RPC error response 00:05:39.516 response: 00:05:39.516 { 00:05:39.516 "code": -19, 00:05:39.516 "message": "No such device" 00:05:39.516 } 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.516 [2024-12-12 10:07:53.069187] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.516 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.776 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.776 10:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:39.776 { 00:05:39.776 "subsystems": [ 00:05:39.776 { 00:05:39.776 "subsystem": "scheduler", 00:05:39.776 "config": [ 00:05:39.776 { 00:05:39.776 "method": "framework_set_scheduler", 00:05:39.776 "params": { 00:05:39.776 "name": "static" 00:05:39.776 } 00:05:39.776 } 00:05:39.776 ] 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "subsystem": "vmd", 00:05:39.776 "config": [] 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "subsystem": "sock", 00:05:39.776 "config": [ 00:05:39.776 { 00:05:39.776 "method": "sock_set_default_impl", 00:05:39.776 "params": { 00:05:39.776 "impl_name": "posix" 00:05:39.776 } 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "method": "sock_impl_set_options", 00:05:39.776 "params": { 00:05:39.776 "impl_name": "ssl", 00:05:39.776 "recv_buf_size": 4096, 00:05:39.776 "send_buf_size": 4096, 00:05:39.776 "enable_recv_pipe": true, 00:05:39.776 "enable_quickack": false, 00:05:39.776 "enable_placement_id": 
0, 00:05:39.776 "enable_zerocopy_send_server": true, 00:05:39.776 "enable_zerocopy_send_client": false, 00:05:39.776 "zerocopy_threshold": 0, 00:05:39.776 "tls_version": 0, 00:05:39.776 "enable_ktls": false 00:05:39.776 } 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "method": "sock_impl_set_options", 00:05:39.776 "params": { 00:05:39.776 "impl_name": "posix", 00:05:39.776 "recv_buf_size": 2097152, 00:05:39.776 "send_buf_size": 2097152, 00:05:39.776 "enable_recv_pipe": true, 00:05:39.776 "enable_quickack": false, 00:05:39.776 "enable_placement_id": 0, 00:05:39.776 "enable_zerocopy_send_server": true, 00:05:39.776 "enable_zerocopy_send_client": false, 00:05:39.776 "zerocopy_threshold": 0, 00:05:39.776 "tls_version": 0, 00:05:39.776 "enable_ktls": false 00:05:39.776 } 00:05:39.776 } 00:05:39.776 ] 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "subsystem": "iobuf", 00:05:39.776 "config": [ 00:05:39.776 { 00:05:39.776 "method": "iobuf_set_options", 00:05:39.776 "params": { 00:05:39.776 "small_pool_count": 8192, 00:05:39.776 "large_pool_count": 1024, 00:05:39.776 "small_bufsize": 8192, 00:05:39.776 "large_bufsize": 135168, 00:05:39.776 "enable_numa": false 00:05:39.776 } 00:05:39.776 } 00:05:39.776 ] 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "subsystem": "keyring", 00:05:39.776 "config": [] 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "subsystem": "vfio_user_target", 00:05:39.776 "config": null 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "subsystem": "fsdev", 00:05:39.776 "config": [ 00:05:39.776 { 00:05:39.776 "method": "fsdev_set_opts", 00:05:39.776 "params": { 00:05:39.776 "fsdev_io_pool_size": 65535, 00:05:39.776 "fsdev_io_cache_size": 256 00:05:39.776 } 00:05:39.776 } 00:05:39.776 ] 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "subsystem": "accel", 00:05:39.776 "config": [ 00:05:39.776 { 00:05:39.776 "method": "accel_set_options", 00:05:39.776 "params": { 00:05:39.776 "small_cache_size": 128, 00:05:39.776 "large_cache_size": 16, 00:05:39.776 "task_count": 2048, 00:05:39.776 "sequence_count": 2048, 00:05:39.776 "buf_count": 2048 00:05:39.776 } 00:05:39.776 } 00:05:39.776 ] 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "subsystem": "bdev", 00:05:39.776 "config": [ 00:05:39.776 { 00:05:39.776 "method": "bdev_set_options", 00:05:39.776 "params": { 00:05:39.776 "bdev_io_pool_size": 65535, 00:05:39.776 "bdev_io_cache_size": 256, 00:05:39.776 "bdev_auto_examine": true, 00:05:39.776 "iobuf_small_cache_size": 128, 00:05:39.776 "iobuf_large_cache_size": 16 00:05:39.776 } 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "method": "bdev_raid_set_options", 00:05:39.776 "params": { 00:05:39.776 "process_window_size_kb": 1024, 00:05:39.776 "process_max_bandwidth_mb_sec": 0 00:05:39.776 } 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "method": "bdev_nvme_set_options", 00:05:39.776 "params": { 00:05:39.776 "action_on_timeout": "none", 00:05:39.776 "timeout_us": 0, 00:05:39.776 "timeout_admin_us": 0, 00:05:39.776 "keep_alive_timeout_ms": 10000, 00:05:39.776 "arbitration_burst": 0, 00:05:39.776 "low_priority_weight": 0, 00:05:39.776 "medium_priority_weight": 0, 00:05:39.776 "high_priority_weight": 0, 00:05:39.776 "nvme_adminq_poll_period_us": 10000, 00:05:39.776 "nvme_ioq_poll_period_us": 0, 00:05:39.776 "io_queue_requests": 0, 00:05:39.776 "delay_cmd_submit": true, 00:05:39.776 "transport_retry_count": 4, 00:05:39.776 "bdev_retry_count": 3, 00:05:39.776 "transport_ack_timeout": 0, 00:05:39.776 "ctrlr_loss_timeout_sec": 0, 00:05:39.776 "reconnect_delay_sec": 0, 00:05:39.776 "fast_io_fail_timeout_sec": 0, 00:05:39.776 
"disable_auto_failback": false, 00:05:39.776 "generate_uuids": false, 00:05:39.776 "transport_tos": 0, 00:05:39.776 "nvme_error_stat": false, 00:05:39.776 "rdma_srq_size": 0, 00:05:39.776 "io_path_stat": false, 00:05:39.776 "allow_accel_sequence": false, 00:05:39.776 "rdma_max_cq_size": 0, 00:05:39.776 "rdma_cm_event_timeout_ms": 0, 00:05:39.776 "dhchap_digests": [ 00:05:39.776 "sha256", 00:05:39.776 "sha384", 00:05:39.776 "sha512" 00:05:39.776 ], 00:05:39.776 "dhchap_dhgroups": [ 00:05:39.776 "null", 00:05:39.776 "ffdhe2048", 00:05:39.776 "ffdhe3072", 00:05:39.776 "ffdhe4096", 00:05:39.776 "ffdhe6144", 00:05:39.776 "ffdhe8192" 00:05:39.776 ], 00:05:39.776 "rdma_umr_per_io": false 00:05:39.776 } 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "method": "bdev_nvme_set_hotplug", 00:05:39.776 "params": { 00:05:39.776 "period_us": 100000, 00:05:39.776 "enable": false 00:05:39.776 } 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "method": "bdev_iscsi_set_options", 00:05:39.776 "params": { 00:05:39.776 "timeout_sec": 30 00:05:39.776 } 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "method": "bdev_wait_for_examine" 00:05:39.776 } 00:05:39.776 ] 00:05:39.776 }, 00:05:39.776 { 00:05:39.776 "subsystem": "nvmf", 00:05:39.776 "config": [ 00:05:39.776 { 00:05:39.776 "method": "nvmf_set_config", 00:05:39.776 "params": { 00:05:39.776 "discovery_filter": "match_any", 00:05:39.776 "admin_cmd_passthru": { 00:05:39.776 "identify_ctrlr": false 00:05:39.776 }, 00:05:39.776 "dhchap_digests": [ 00:05:39.776 "sha256", 00:05:39.776 "sha384", 00:05:39.776 "sha512" 00:05:39.776 ], 00:05:39.776 "dhchap_dhgroups": [ 00:05:39.776 "null", 00:05:39.776 "ffdhe2048", 00:05:39.776 "ffdhe3072", 00:05:39.776 "ffdhe4096", 00:05:39.776 "ffdhe6144", 00:05:39.776 "ffdhe8192" 00:05:39.776 ] 00:05:39.777 } 00:05:39.777 }, 00:05:39.777 { 00:05:39.777 "method": "nvmf_set_max_subsystems", 00:05:39.777 "params": { 00:05:39.777 "max_subsystems": 1024 00:05:39.777 } 00:05:39.777 }, 00:05:39.777 { 00:05:39.777 "method": "nvmf_set_crdt", 00:05:39.777 "params": { 00:05:39.777 "crdt1": 0, 00:05:39.777 "crdt2": 0, 00:05:39.777 "crdt3": 0 00:05:39.777 } 00:05:39.777 }, 00:05:39.777 { 00:05:39.777 "method": "nvmf_create_transport", 00:05:39.777 "params": { 00:05:39.777 "trtype": "TCP", 00:05:39.777 "max_queue_depth": 128, 00:05:39.777 "max_io_qpairs_per_ctrlr": 127, 00:05:39.777 "in_capsule_data_size": 4096, 00:05:39.777 "max_io_size": 131072, 00:05:39.777 "io_unit_size": 131072, 00:05:39.777 "max_aq_depth": 128, 00:05:39.777 "num_shared_buffers": 511, 00:05:39.777 "buf_cache_size": 4294967295, 00:05:39.777 "dif_insert_or_strip": false, 00:05:39.777 "zcopy": false, 00:05:39.777 "c2h_success": true, 00:05:39.777 "sock_priority": 0, 00:05:39.777 "abort_timeout_sec": 1, 00:05:39.777 "ack_timeout": 0, 00:05:39.777 "data_wr_pool_size": 0 00:05:39.777 } 00:05:39.777 } 00:05:39.777 ] 00:05:39.777 }, 00:05:39.777 { 00:05:39.777 "subsystem": "nbd", 00:05:39.777 "config": [] 00:05:39.777 }, 00:05:39.777 { 00:05:39.777 "subsystem": "ublk", 00:05:39.777 "config": [] 00:05:39.777 }, 00:05:39.777 { 00:05:39.777 "subsystem": "vhost_blk", 00:05:39.777 "config": [] 00:05:39.777 }, 00:05:39.777 { 00:05:39.777 "subsystem": "scsi", 00:05:39.777 "config": null 00:05:39.777 }, 00:05:39.777 { 00:05:39.777 "subsystem": "iscsi", 00:05:39.777 "config": [ 00:05:39.777 { 00:05:39.777 "method": "iscsi_set_options", 00:05:39.777 "params": { 00:05:39.777 "node_base": "iqn.2016-06.io.spdk", 00:05:39.777 "max_sessions": 128, 00:05:39.777 "max_connections_per_session": 2, 00:05:39.777 
"max_queue_depth": 64, 00:05:39.777 "default_time2wait": 2, 00:05:39.777 "default_time2retain": 20, 00:05:39.777 "first_burst_length": 8192, 00:05:39.777 "immediate_data": true, 00:05:39.777 "allow_duplicated_isid": false, 00:05:39.777 "error_recovery_level": 0, 00:05:39.777 "nop_timeout": 60, 00:05:39.777 "nop_in_interval": 30, 00:05:39.777 "disable_chap": false, 00:05:39.777 "require_chap": false, 00:05:39.777 "mutual_chap": false, 00:05:39.777 "chap_group": 0, 00:05:39.777 "max_large_datain_per_connection": 64, 00:05:39.777 "max_r2t_per_connection": 4, 00:05:39.777 "pdu_pool_size": 36864, 00:05:39.777 "immediate_data_pool_size": 16384, 00:05:39.777 "data_out_pool_size": 2048 00:05:39.777 } 00:05:39.777 } 00:05:39.777 ] 00:05:39.777 }, 00:05:39.777 { 00:05:39.777 "subsystem": "vhost_scsi", 00:05:39.777 "config": [] 00:05:39.777 } 00:05:39.777 ] 00:05:39.777 } 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 456611 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 456611 ']' 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 456611 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 456611 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 456611' 00:05:39.777 killing process with pid 456611 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 456611 00:05:39.777 10:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 456611 00:05:40.036 10:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=456718 00:05:40.036 10:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:40.037 10:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 456718 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 456718 ']' 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 456718 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 456718 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 456718' 00:05:45.311 killing process with pid 456718 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 456718 00:05:45.311 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 456718 00:05:45.572 10:07:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:45.572 10:07:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:45.572 00:05:45.572 real 0m6.296s 00:05:45.572 user 0m5.979s 00:05:45.572 sys 0m0.656s 00:05:45.572 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.572 10:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.572 ************************************ 00:05:45.572 END TEST skip_rpc_with_json 00:05:45.572 ************************************ 00:05:45.572 10:07:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:45.572 10:07:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.572 10:07:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.572 10:07:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.572 ************************************ 00:05:45.572 START TEST skip_rpc_with_delay 00:05:45.572 ************************************ 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:05:45.572 [2024-12-12 10:07:59.092066] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.572 00:05:45.572 real 0m0.047s 00:05:45.572 user 0m0.022s 00:05:45.572 sys 0m0.025s 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.572 10:07:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:45.572 ************************************ 00:05:45.572 END TEST skip_rpc_with_delay 00:05:45.572 ************************************ 00:05:45.572 10:07:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:45.572 10:07:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:45.572 10:07:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:45.572 10:07:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.572 10:07:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.572 10:07:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.572 ************************************ 00:05:45.572 START TEST exit_on_failed_rpc_init 00:05:45.572 ************************************ 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=457740 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 457740 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 457740 ']' 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.572 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.832 [2024-12-12 10:07:59.222778] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
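For context, the error traced just above is the expected result in skip_rpc_with_delay: spdk_tgt rejects --wait-for-rpc when --no-rpc-server is also given, because there is no RPC server to wait for. A minimal standalone sketch of the same check, assuming the workspace layout used in this run:

SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
# Expect a non-zero exit: --wait-for-rpc is meaningless when the RPC server is disabled.
if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
else
    echo "expected failure: flag combination rejected"
fi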
00:05:45.832 [2024-12-12 10:07:59.222854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457740 ] 00:05:45.832 [2024-12-12 10:07:59.311101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.832 [2024-12-12 10:07:59.354601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:46.091 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:46.091 [2024-12-12 10:07:59.604277] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:05:46.091 [2024-12-12 10:07:59.604351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457760 ] 00:05:46.091 [2024-12-12 10:07:59.690619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.350 [2024-12-12 10:07:59.731281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.350 [2024-12-12 10:07:59.731347] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
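The "socket path ... in use" error above is the outcome exit_on_failed_rpc_init is looking for: a second spdk_tgt is launched while the first still owns the default RPC socket /var/tmp/spdk.sock, so RPC initialization fails and, as the next entries show, the app stops with a non-zero status. When two targets genuinely need to coexist, the second can be given its own socket with -r (the same flag the json_config_extra_key test uses later in this log). A hedged sketch, with a hypothetical socket path:

SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 &                               # first instance, default /var/tmp/spdk.sock
"$SPDK_BIN" -m 0x2 -r /var/tmp/spdk_second.sock &  # second instance on its own (hypothetical) RPC socket
wait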
00:05:46.350 [2024-12-12 10:07:59.731360] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:46.350 [2024-12-12 10:07:59.731368] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 457740 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 457740 ']' 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 457740 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 457740 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 457740' 00:05:46.350 killing process with pid 457740 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 457740 00:05:46.350 10:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 457740 00:05:46.610 00:05:46.610 real 0m0.934s 00:05:46.610 user 0m0.951s 00:05:46.610 sys 0m0.425s 00:05:46.610 10:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.610 10:08:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.610 ************************************ 00:05:46.610 END TEST exit_on_failed_rpc_init 00:05:46.610 ************************************ 00:05:46.610 10:08:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:46.610 00:05:46.610 real 0m13.192s 00:05:46.610 user 0m12.342s 00:05:46.610 sys 0m1.730s 00:05:46.610 10:08:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.610 10:08:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.610 ************************************ 00:05:46.610 END TEST skip_rpc 00:05:46.610 ************************************ 00:05:46.610 10:08:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:46.610 10:08:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.610 10:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.610 10:08:00 -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.869 ************************************ 00:05:46.869 START TEST rpc_client 00:05:46.869 ************************************ 00:05:46.869 10:08:00 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:46.869 * Looking for test storage... 00:05:46.869 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:46.869 10:08:00 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.869 10:08:00 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.869 10:08:00 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.869 10:08:00 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.869 10:08:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:46.869 10:08:00 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.869 10:08:00 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.869 --rc genhtml_branch_coverage=1 00:05:46.869 --rc genhtml_function_coverage=1 00:05:46.869 --rc genhtml_legend=1 00:05:46.869 --rc geninfo_all_blocks=1 00:05:46.869 --rc geninfo_unexecuted_blocks=1 00:05:46.869 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.869 ' 00:05:46.870 10:08:00 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.870 --rc genhtml_branch_coverage=1 00:05:46.870 --rc genhtml_function_coverage=1 00:05:46.870 --rc genhtml_legend=1 00:05:46.870 --rc geninfo_all_blocks=1 00:05:46.870 --rc geninfo_unexecuted_blocks=1 00:05:46.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.870 ' 00:05:46.870 10:08:00 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.870 --rc genhtml_branch_coverage=1 00:05:46.870 --rc genhtml_function_coverage=1 00:05:46.870 --rc genhtml_legend=1 00:05:46.870 --rc geninfo_all_blocks=1 00:05:46.870 --rc geninfo_unexecuted_blocks=1 00:05:46.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.870 ' 00:05:46.870 10:08:00 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.870 --rc genhtml_branch_coverage=1 00:05:46.870 --rc genhtml_function_coverage=1 00:05:46.870 --rc genhtml_legend=1 00:05:46.870 --rc geninfo_all_blocks=1 00:05:46.870 --rc geninfo_unexecuted_blocks=1 00:05:46.870 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.870 ' 00:05:46.870 10:08:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:46.870 OK 00:05:46.870 10:08:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.870 00:05:46.870 real 0m0.213s 00:05:46.870 user 0m0.107s 00:05:46.870 sys 0m0.123s 00:05:46.870 10:08:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:46.870 10:08:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:46.870 ************************************ 00:05:46.870 END TEST rpc_client 00:05:46.870 ************************************ 00:05:47.130 10:08:00 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.130 10:08:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.130 10:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.130 10:08:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.130 ************************************ 00:05:47.130 START TEST json_config 00:05:47.130 ************************************ 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.130 10:08:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.130 10:08:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.130 10:08:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.130 10:08:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.130 10:08:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.130 10:08:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.130 10:08:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.130 10:08:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.130 10:08:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.130 10:08:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.130 10:08:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.130 10:08:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:47.130 10:08:00 json_config -- scripts/common.sh@345 -- # : 1 00:05:47.130 10:08:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.130 10:08:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.130 10:08:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:47.130 10:08:00 json_config -- scripts/common.sh@353 -- # local d=1 00:05:47.130 10:08:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.130 10:08:00 json_config -- scripts/common.sh@355 -- # echo 1 00:05:47.130 10:08:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.130 10:08:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:47.130 10:08:00 json_config -- scripts/common.sh@353 -- # local d=2 00:05:47.130 10:08:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.130 10:08:00 json_config -- scripts/common.sh@355 -- # echo 2 00:05:47.130 10:08:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.130 10:08:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.130 10:08:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.130 10:08:00 json_config -- scripts/common.sh@368 -- # return 0 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.130 --rc genhtml_branch_coverage=1 00:05:47.130 --rc genhtml_function_coverage=1 00:05:47.130 --rc genhtml_legend=1 00:05:47.130 --rc geninfo_all_blocks=1 00:05:47.130 --rc geninfo_unexecuted_blocks=1 00:05:47.130 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.130 ' 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.130 --rc genhtml_branch_coverage=1 00:05:47.130 --rc genhtml_function_coverage=1 00:05:47.130 --rc genhtml_legend=1 00:05:47.130 --rc geninfo_all_blocks=1 00:05:47.130 --rc geninfo_unexecuted_blocks=1 00:05:47.130 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.130 ' 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.130 --rc genhtml_branch_coverage=1 00:05:47.130 --rc genhtml_function_coverage=1 00:05:47.130 --rc genhtml_legend=1 00:05:47.130 --rc geninfo_all_blocks=1 00:05:47.130 --rc geninfo_unexecuted_blocks=1 00:05:47.130 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.130 ' 00:05:47.130 10:08:00 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.130 --rc genhtml_branch_coverage=1 00:05:47.130 --rc genhtml_function_coverage=1 00:05:47.130 --rc genhtml_legend=1 00:05:47.130 --rc geninfo_all_blocks=1 00:05:47.130 --rc geninfo_unexecuted_blocks=1 00:05:47.130 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.130 ' 00:05:47.130 10:08:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.130 10:08:00 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:47.130 10:08:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.130 10:08:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.130 10:08:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.130 10:08:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.131 10:08:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.131 10:08:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.131 10:08:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.131 10:08:00 json_config -- paths/export.sh@5 -- # export PATH 00:05:47.131 10:08:00 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@51 -- # : 0 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.131 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.131 10:08:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.131 10:08:00 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:47.131 10:08:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:47.131 10:08:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:47.131 10:08:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:47.131 10:08:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:47.131 10:08:00 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:47.131 WARNING: No tests are enabled so not running JSON configuration tests 00:05:47.131 10:08:00 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:47.131 00:05:47.131 real 0m0.201s 00:05:47.131 user 0m0.123s 00:05:47.131 sys 0m0.087s 00:05:47.131 10:08:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.131 10:08:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.131 ************************************ 00:05:47.131 END TEST json_config 00:05:47.131 ************************************ 00:05:47.391 10:08:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:47.391 10:08:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.391 10:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.391 10:08:00 -- common/autotest_common.sh@10 -- # set +x 00:05:47.391 ************************************ 00:05:47.391 START TEST json_config_extra_key 00:05:47.391 ************************************ 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov 
--version 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.391 10:08:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.391 --rc genhtml_branch_coverage=1 00:05:47.391 --rc genhtml_function_coverage=1 00:05:47.391 --rc genhtml_legend=1 00:05:47.391 --rc geninfo_all_blocks=1 00:05:47.391 --rc geninfo_unexecuted_blocks=1 00:05:47.391 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.391 ' 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.391 --rc genhtml_branch_coverage=1 
00:05:47.391 --rc genhtml_function_coverage=1 00:05:47.391 --rc genhtml_legend=1 00:05:47.391 --rc geninfo_all_blocks=1 00:05:47.391 --rc geninfo_unexecuted_blocks=1 00:05:47.391 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.391 ' 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.391 --rc genhtml_branch_coverage=1 00:05:47.391 --rc genhtml_function_coverage=1 00:05:47.391 --rc genhtml_legend=1 00:05:47.391 --rc geninfo_all_blocks=1 00:05:47.391 --rc geninfo_unexecuted_blocks=1 00:05:47.391 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.391 ' 00:05:47.391 10:08:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.391 --rc genhtml_branch_coverage=1 00:05:47.391 --rc genhtml_function_coverage=1 00:05:47.391 --rc genhtml_legend=1 00:05:47.391 --rc geninfo_all_blocks=1 00:05:47.391 --rc geninfo_unexecuted_blocks=1 00:05:47.391 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:47.391 ' 00:05:47.391 10:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.391 10:08:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:47.391 10:08:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.391 10:08:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.392 10:08:01 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.392 10:08:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.392 10:08:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.392 10:08:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.392 10:08:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.392 10:08:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:47.392 10:08:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.392 10:08:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:47.392 10:08:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.392 10:08:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.392 10:08:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.392 10:08:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.652 10:08:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.652 10:08:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.652 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.652 10:08:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.652 10:08:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.652 10:08:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:47.652 INFO: launching applications... 00:05:47.652 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=458187 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:47.652 Waiting for target to run... 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 458187 /var/tmp/spdk_tgt.sock 00:05:47.652 10:08:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 458187 ']' 00:05:47.652 10:08:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.652 10:08:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:47.652 10:08:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.652 10:08:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
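The command traced above starts the target from a pre-built JSON configuration (--json .../extra_key.json) on a non-default RPC socket (-r /var/tmp/spdk_tgt.sock). Such a file is a "subsystems" array of method/params entries, in the same shape as the configuration dump earlier in this log. A minimal hand-written illustration, reusing only methods that appear in that dump (contents are illustrative, not the actual extra_key.json):

cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_set_hotplug", "params": { "period_us": 100000, "enable": false } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Launch the target with that config and a dedicated RPC socket, mirroring the flags used above.
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_config.json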
00:05:47.652 10:08:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.652 10:08:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.652 [2024-12-12 10:08:01.064964] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:05:47.652 [2024-12-12 10:08:01.065031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458187 ] 00:05:47.910 [2024-12-12 10:08:01.512749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.169 [2024-12-12 10:08:01.562645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.429 10:08:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.429 10:08:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:48.429 10:08:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:48.429 00:05:48.429 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:48.429 INFO: shutting down applications... 00:05:48.429 10:08:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:48.429 10:08:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:48.429 10:08:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:48.429 10:08:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 458187 ]] 00:05:48.429 10:08:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 458187 00:05:48.429 10:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:48.429 10:08:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.429 10:08:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 458187 00:05:48.429 10:08:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.998 10:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.998 10:08:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.998 10:08:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 458187 00:05:48.998 10:08:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.998 10:08:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:48.998 10:08:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.998 10:08:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.998 SPDK target shutdown done 00:05:48.998 10:08:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:48.998 Success 00:05:48.998 00:05:48.998 real 0m1.595s 00:05:48.998 user 0m1.173s 00:05:48.998 sys 0m0.599s 00:05:48.998 10:08:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.998 10:08:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.998 ************************************ 00:05:48.998 END TEST json_config_extra_key 00:05:48.998 ************************************ 00:05:48.998 10:08:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:05:48.998 10:08:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.998 10:08:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.998 10:08:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.998 ************************************ 00:05:48.998 START TEST alias_rpc 00:05:48.998 ************************************ 00:05:48.998 10:08:02 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.998 * Looking for test storage... 00:05:48.998 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:48.998 10:08:02 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.998 10:08:02 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.998 10:08:02 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:49.257 10:08:02 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:49.257 10:08:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.257 10:08:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.257 10:08:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.257 10:08:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.257 10:08:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.257 10:08:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.257 10:08:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.257 10:08:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.257 10:08:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.258 10:08:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:49.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.258 --rc genhtml_branch_coverage=1 00:05:49.258 --rc genhtml_function_coverage=1 00:05:49.258 --rc genhtml_legend=1 00:05:49.258 --rc geninfo_all_blocks=1 00:05:49.258 --rc geninfo_unexecuted_blocks=1 00:05:49.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.258 ' 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:49.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.258 --rc genhtml_branch_coverage=1 00:05:49.258 --rc genhtml_function_coverage=1 00:05:49.258 --rc genhtml_legend=1 00:05:49.258 --rc geninfo_all_blocks=1 00:05:49.258 --rc geninfo_unexecuted_blocks=1 00:05:49.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.258 ' 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:49.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.258 --rc genhtml_branch_coverage=1 00:05:49.258 --rc genhtml_function_coverage=1 00:05:49.258 --rc genhtml_legend=1 00:05:49.258 --rc geninfo_all_blocks=1 00:05:49.258 --rc geninfo_unexecuted_blocks=1 00:05:49.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.258 ' 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:49.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.258 --rc genhtml_branch_coverage=1 00:05:49.258 --rc genhtml_function_coverage=1 00:05:49.258 --rc genhtml_legend=1 00:05:49.258 --rc geninfo_all_blocks=1 00:05:49.258 --rc geninfo_unexecuted_blocks=1 00:05:49.258 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:49.258 ' 00:05:49.258 10:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.258 10:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=458523 00:05:49.258 10:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.258 10:08:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 458523 00:05:49.258 10:08:02 alias_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 458523 ']' 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.258 10:08:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.258 [2024-12-12 10:08:02.734257] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:05:49.258 [2024-12-12 10:08:02.734331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458523 ] 00:05:49.258 [2024-12-12 10:08:02.822468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.258 [2024-12-12 10:08:02.864445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.517 10:08:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.517 10:08:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.517 10:08:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:49.776 10:08:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 458523 00:05:49.776 10:08:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 458523 ']' 00:05:49.776 10:08:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 458523 00:05:49.776 10:08:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:49.777 10:08:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.777 10:08:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 458523 00:05:49.777 10:08:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.777 10:08:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.777 10:08:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 458523' 00:05:49.777 killing process with pid 458523 00:05:49.777 10:08:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 458523 00:05:49.777 10:08:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 458523 00:05:50.036 00:05:50.036 real 0m1.137s 00:05:50.036 user 0m1.109s 00:05:50.036 sys 0m0.479s 00:05:50.036 10:08:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.036 10:08:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.036 ************************************ 00:05:50.036 END TEST alias_rpc 00:05:50.036 ************************************ 00:05:50.295 10:08:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:50.295 10:08:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:50.295 10:08:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.295 10:08:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.295 10:08:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.295 ************************************ 00:05:50.295 START TEST spdkcli_tcp 
00:05:50.295 ************************************ 00:05:50.295 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:50.296 * Looking for test storage... 00:05:50.296 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.296 10:08:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.296 --rc genhtml_branch_coverage=1 00:05:50.296 --rc genhtml_function_coverage=1 00:05:50.296 --rc genhtml_legend=1 00:05:50.296 --rc geninfo_all_blocks=1 00:05:50.296 --rc geninfo_unexecuted_blocks=1 00:05:50.296 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.296 ' 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.296 --rc genhtml_branch_coverage=1 00:05:50.296 --rc genhtml_function_coverage=1 00:05:50.296 --rc genhtml_legend=1 00:05:50.296 --rc geninfo_all_blocks=1 00:05:50.296 --rc geninfo_unexecuted_blocks=1 00:05:50.296 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.296 ' 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.296 --rc genhtml_branch_coverage=1 00:05:50.296 --rc genhtml_function_coverage=1 00:05:50.296 --rc genhtml_legend=1 00:05:50.296 --rc geninfo_all_blocks=1 00:05:50.296 --rc geninfo_unexecuted_blocks=1 00:05:50.296 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.296 ' 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.296 --rc genhtml_branch_coverage=1 00:05:50.296 --rc genhtml_function_coverage=1 00:05:50.296 --rc genhtml_legend=1 00:05:50.296 --rc geninfo_all_blocks=1 00:05:50.296 --rc geninfo_unexecuted_blocks=1 00:05:50.296 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:50.296 ' 00:05:50.296 10:08:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:50.296 10:08:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:50.296 10:08:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:50.296 10:08:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:50.296 10:08:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:50.296 10:08:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:50.296 10:08:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.296 10:08:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.555 10:08:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=458839 00:05:50.555 10:08:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 458839 00:05:50.555 10:08:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:50.555 10:08:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 458839 ']' 00:05:50.555 10:08:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.555 10:08:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.555 10:08:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.555 10:08:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.555 10:08:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.555 [2024-12-12 10:08:03.962231] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:05:50.555 [2024-12-12 10:08:03.962305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458839 ] 00:05:50.555 [2024-12-12 10:08:04.048463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.555 [2024-12-12 10:08:04.092172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.555 [2024-12-12 10:08:04.092172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.815 10:08:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.815 10:08:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:50.815 10:08:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=458869 00:05:50.815 10:08:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:50.815 10:08:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:51.074 [ 00:05:51.074 "spdk_get_version", 00:05:51.074 "rpc_get_methods", 00:05:51.074 "notify_get_notifications", 00:05:51.074 "notify_get_types", 00:05:51.074 "trace_get_info", 00:05:51.074 "trace_get_tpoint_group_mask", 00:05:51.074 "trace_disable_tpoint_group", 00:05:51.074 "trace_enable_tpoint_group", 00:05:51.074 "trace_clear_tpoint_mask", 00:05:51.074 "trace_set_tpoint_mask", 00:05:51.074 "fsdev_set_opts", 00:05:51.074 "fsdev_get_opts", 00:05:51.074 "framework_get_pci_devices", 00:05:51.074 "framework_get_config", 00:05:51.074 "framework_get_subsystems", 00:05:51.074 "vfu_tgt_set_base_path", 00:05:51.074 "keyring_get_keys", 
00:05:51.074 "iobuf_get_stats", 00:05:51.074 "iobuf_set_options", 00:05:51.074 "sock_get_default_impl", 00:05:51.074 "sock_set_default_impl", 00:05:51.074 "sock_impl_set_options", 00:05:51.074 "sock_impl_get_options", 00:05:51.074 "vmd_rescan", 00:05:51.074 "vmd_remove_device", 00:05:51.074 "vmd_enable", 00:05:51.074 "accel_get_stats", 00:05:51.074 "accel_set_options", 00:05:51.074 "accel_set_driver", 00:05:51.075 "accel_crypto_key_destroy", 00:05:51.075 "accel_crypto_keys_get", 00:05:51.075 "accel_crypto_key_create", 00:05:51.075 "accel_assign_opc", 00:05:51.075 "accel_get_module_info", 00:05:51.075 "accel_get_opc_assignments", 00:05:51.075 "bdev_get_histogram", 00:05:51.075 "bdev_enable_histogram", 00:05:51.075 "bdev_set_qos_limit", 00:05:51.075 "bdev_set_qd_sampling_period", 00:05:51.075 "bdev_get_bdevs", 00:05:51.075 "bdev_reset_iostat", 00:05:51.075 "bdev_get_iostat", 00:05:51.075 "bdev_examine", 00:05:51.075 "bdev_wait_for_examine", 00:05:51.075 "bdev_set_options", 00:05:51.075 "scsi_get_devices", 00:05:51.075 "thread_set_cpumask", 00:05:51.075 "scheduler_set_options", 00:05:51.075 "framework_get_governor", 00:05:51.075 "framework_get_scheduler", 00:05:51.075 "framework_set_scheduler", 00:05:51.075 "framework_get_reactors", 00:05:51.075 "thread_get_io_channels", 00:05:51.075 "thread_get_pollers", 00:05:51.075 "thread_get_stats", 00:05:51.075 "framework_monitor_context_switch", 00:05:51.075 "spdk_kill_instance", 00:05:51.075 "log_enable_timestamps", 00:05:51.075 "log_get_flags", 00:05:51.075 "log_clear_flag", 00:05:51.075 "log_set_flag", 00:05:51.075 "log_get_level", 00:05:51.075 "log_set_level", 00:05:51.075 "log_get_print_level", 00:05:51.075 "log_set_print_level", 00:05:51.075 "framework_enable_cpumask_locks", 00:05:51.075 "framework_disable_cpumask_locks", 00:05:51.075 "framework_wait_init", 00:05:51.075 "framework_start_init", 00:05:51.075 "virtio_blk_create_transport", 00:05:51.075 "virtio_blk_get_transports", 00:05:51.075 "vhost_controller_set_coalescing", 00:05:51.075 "vhost_get_controllers", 00:05:51.075 "vhost_delete_controller", 00:05:51.075 "vhost_create_blk_controller", 00:05:51.075 "vhost_scsi_controller_remove_target", 00:05:51.075 "vhost_scsi_controller_add_target", 00:05:51.075 "vhost_start_scsi_controller", 00:05:51.075 "vhost_create_scsi_controller", 00:05:51.075 "ublk_recover_disk", 00:05:51.075 "ublk_get_disks", 00:05:51.075 "ublk_stop_disk", 00:05:51.075 "ublk_start_disk", 00:05:51.075 "ublk_destroy_target", 00:05:51.075 "ublk_create_target", 00:05:51.075 "nbd_get_disks", 00:05:51.075 "nbd_stop_disk", 00:05:51.075 "nbd_start_disk", 00:05:51.075 "env_dpdk_get_mem_stats", 00:05:51.075 "nvmf_stop_mdns_prr", 00:05:51.075 "nvmf_publish_mdns_prr", 00:05:51.075 "nvmf_subsystem_get_listeners", 00:05:51.075 "nvmf_subsystem_get_qpairs", 00:05:51.075 "nvmf_subsystem_get_controllers", 00:05:51.075 "nvmf_get_stats", 00:05:51.075 "nvmf_get_transports", 00:05:51.075 "nvmf_create_transport", 00:05:51.075 "nvmf_get_targets", 00:05:51.075 "nvmf_delete_target", 00:05:51.075 "nvmf_create_target", 00:05:51.075 "nvmf_subsystem_allow_any_host", 00:05:51.075 "nvmf_subsystem_set_keys", 00:05:51.075 "nvmf_discovery_referral_remove_host", 00:05:51.075 "nvmf_discovery_referral_add_host", 00:05:51.075 "nvmf_subsystem_remove_host", 00:05:51.075 "nvmf_subsystem_add_host", 00:05:51.075 "nvmf_ns_remove_host", 00:05:51.075 "nvmf_ns_add_host", 00:05:51.075 "nvmf_subsystem_remove_ns", 00:05:51.075 "nvmf_subsystem_set_ns_ana_group", 00:05:51.075 "nvmf_subsystem_add_ns", 00:05:51.075 
"nvmf_subsystem_listener_set_ana_state", 00:05:51.075 "nvmf_discovery_get_referrals", 00:05:51.075 "nvmf_discovery_remove_referral", 00:05:51.075 "nvmf_discovery_add_referral", 00:05:51.075 "nvmf_subsystem_remove_listener", 00:05:51.075 "nvmf_subsystem_add_listener", 00:05:51.075 "nvmf_delete_subsystem", 00:05:51.075 "nvmf_create_subsystem", 00:05:51.075 "nvmf_get_subsystems", 00:05:51.075 "nvmf_set_crdt", 00:05:51.075 "nvmf_set_config", 00:05:51.075 "nvmf_set_max_subsystems", 00:05:51.075 "iscsi_get_histogram", 00:05:51.075 "iscsi_enable_histogram", 00:05:51.075 "iscsi_set_options", 00:05:51.075 "iscsi_get_auth_groups", 00:05:51.075 "iscsi_auth_group_remove_secret", 00:05:51.075 "iscsi_auth_group_add_secret", 00:05:51.075 "iscsi_delete_auth_group", 00:05:51.075 "iscsi_create_auth_group", 00:05:51.075 "iscsi_set_discovery_auth", 00:05:51.075 "iscsi_get_options", 00:05:51.075 "iscsi_target_node_request_logout", 00:05:51.075 "iscsi_target_node_set_redirect", 00:05:51.075 "iscsi_target_node_set_auth", 00:05:51.075 "iscsi_target_node_add_lun", 00:05:51.075 "iscsi_get_stats", 00:05:51.075 "iscsi_get_connections", 00:05:51.075 "iscsi_portal_group_set_auth", 00:05:51.075 "iscsi_start_portal_group", 00:05:51.075 "iscsi_delete_portal_group", 00:05:51.075 "iscsi_create_portal_group", 00:05:51.075 "iscsi_get_portal_groups", 00:05:51.075 "iscsi_delete_target_node", 00:05:51.075 "iscsi_target_node_remove_pg_ig_maps", 00:05:51.075 "iscsi_target_node_add_pg_ig_maps", 00:05:51.075 "iscsi_create_target_node", 00:05:51.075 "iscsi_get_target_nodes", 00:05:51.075 "iscsi_delete_initiator_group", 00:05:51.075 "iscsi_initiator_group_remove_initiators", 00:05:51.075 "iscsi_initiator_group_add_initiators", 00:05:51.075 "iscsi_create_initiator_group", 00:05:51.075 "iscsi_get_initiator_groups", 00:05:51.075 "fsdev_aio_delete", 00:05:51.075 "fsdev_aio_create", 00:05:51.075 "keyring_linux_set_options", 00:05:51.075 "keyring_file_remove_key", 00:05:51.075 "keyring_file_add_key", 00:05:51.075 "vfu_virtio_create_fs_endpoint", 00:05:51.075 "vfu_virtio_create_scsi_endpoint", 00:05:51.075 "vfu_virtio_scsi_remove_target", 00:05:51.075 "vfu_virtio_scsi_add_target", 00:05:51.075 "vfu_virtio_create_blk_endpoint", 00:05:51.075 "vfu_virtio_delete_endpoint", 00:05:51.075 "iaa_scan_accel_module", 00:05:51.075 "dsa_scan_accel_module", 00:05:51.075 "ioat_scan_accel_module", 00:05:51.075 "accel_error_inject_error", 00:05:51.075 "bdev_iscsi_delete", 00:05:51.075 "bdev_iscsi_create", 00:05:51.075 "bdev_iscsi_set_options", 00:05:51.075 "bdev_virtio_attach_controller", 00:05:51.075 "bdev_virtio_scsi_get_devices", 00:05:51.075 "bdev_virtio_detach_controller", 00:05:51.075 "bdev_virtio_blk_set_hotplug", 00:05:51.075 "bdev_ftl_set_property", 00:05:51.075 "bdev_ftl_get_properties", 00:05:51.075 "bdev_ftl_get_stats", 00:05:51.075 "bdev_ftl_unmap", 00:05:51.075 "bdev_ftl_unload", 00:05:51.075 "bdev_ftl_delete", 00:05:51.075 "bdev_ftl_load", 00:05:51.075 "bdev_ftl_create", 00:05:51.075 "bdev_aio_delete", 00:05:51.075 "bdev_aio_rescan", 00:05:51.075 "bdev_aio_create", 00:05:51.075 "blobfs_create", 00:05:51.075 "blobfs_detect", 00:05:51.075 "blobfs_set_cache_size", 00:05:51.075 "bdev_zone_block_delete", 00:05:51.075 "bdev_zone_block_create", 00:05:51.075 "bdev_delay_delete", 00:05:51.075 "bdev_delay_create", 00:05:51.075 "bdev_delay_update_latency", 00:05:51.075 "bdev_split_delete", 00:05:51.075 "bdev_split_create", 00:05:51.075 "bdev_error_inject_error", 00:05:51.075 "bdev_error_delete", 00:05:51.075 "bdev_error_create", 00:05:51.075 
"bdev_raid_set_options", 00:05:51.075 "bdev_raid_remove_base_bdev", 00:05:51.075 "bdev_raid_add_base_bdev", 00:05:51.075 "bdev_raid_delete", 00:05:51.075 "bdev_raid_create", 00:05:51.075 "bdev_raid_get_bdevs", 00:05:51.075 "bdev_lvol_set_parent_bdev", 00:05:51.075 "bdev_lvol_set_parent", 00:05:51.075 "bdev_lvol_check_shallow_copy", 00:05:51.075 "bdev_lvol_start_shallow_copy", 00:05:51.075 "bdev_lvol_grow_lvstore", 00:05:51.075 "bdev_lvol_get_lvols", 00:05:51.075 "bdev_lvol_get_lvstores", 00:05:51.075 "bdev_lvol_delete", 00:05:51.075 "bdev_lvol_set_read_only", 00:05:51.075 "bdev_lvol_resize", 00:05:51.075 "bdev_lvol_decouple_parent", 00:05:51.075 "bdev_lvol_inflate", 00:05:51.075 "bdev_lvol_rename", 00:05:51.075 "bdev_lvol_clone_bdev", 00:05:51.075 "bdev_lvol_clone", 00:05:51.075 "bdev_lvol_snapshot", 00:05:51.075 "bdev_lvol_create", 00:05:51.075 "bdev_lvol_delete_lvstore", 00:05:51.075 "bdev_lvol_rename_lvstore", 00:05:51.075 "bdev_lvol_create_lvstore", 00:05:51.075 "bdev_passthru_delete", 00:05:51.075 "bdev_passthru_create", 00:05:51.075 "bdev_nvme_cuse_unregister", 00:05:51.075 "bdev_nvme_cuse_register", 00:05:51.075 "bdev_opal_new_user", 00:05:51.075 "bdev_opal_set_lock_state", 00:05:51.075 "bdev_opal_delete", 00:05:51.075 "bdev_opal_get_info", 00:05:51.075 "bdev_opal_create", 00:05:51.075 "bdev_nvme_opal_revert", 00:05:51.075 "bdev_nvme_opal_init", 00:05:51.075 "bdev_nvme_send_cmd", 00:05:51.075 "bdev_nvme_set_keys", 00:05:51.075 "bdev_nvme_get_path_iostat", 00:05:51.075 "bdev_nvme_get_mdns_discovery_info", 00:05:51.075 "bdev_nvme_stop_mdns_discovery", 00:05:51.075 "bdev_nvme_start_mdns_discovery", 00:05:51.075 "bdev_nvme_set_multipath_policy", 00:05:51.075 "bdev_nvme_set_preferred_path", 00:05:51.075 "bdev_nvme_get_io_paths", 00:05:51.075 "bdev_nvme_remove_error_injection", 00:05:51.075 "bdev_nvme_add_error_injection", 00:05:51.075 "bdev_nvme_get_discovery_info", 00:05:51.075 "bdev_nvme_stop_discovery", 00:05:51.075 "bdev_nvme_start_discovery", 00:05:51.075 "bdev_nvme_get_controller_health_info", 00:05:51.075 "bdev_nvme_disable_controller", 00:05:51.075 "bdev_nvme_enable_controller", 00:05:51.075 "bdev_nvme_reset_controller", 00:05:51.075 "bdev_nvme_get_transport_statistics", 00:05:51.075 "bdev_nvme_apply_firmware", 00:05:51.075 "bdev_nvme_detach_controller", 00:05:51.075 "bdev_nvme_get_controllers", 00:05:51.075 "bdev_nvme_attach_controller", 00:05:51.075 "bdev_nvme_set_hotplug", 00:05:51.075 "bdev_nvme_set_options", 00:05:51.075 "bdev_null_resize", 00:05:51.075 "bdev_null_delete", 00:05:51.075 "bdev_null_create", 00:05:51.075 "bdev_malloc_delete", 00:05:51.075 "bdev_malloc_create" 00:05:51.075 ] 00:05:51.075 10:08:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:51.075 10:08:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.076 10:08:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:51.076 10:08:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 458839 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 458839 ']' 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 458839 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 458839 00:05:51.076 10:08:04 
spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 458839' 00:05:51.076 killing process with pid 458839 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 458839 00:05:51.076 10:08:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 458839 00:05:51.335 00:05:51.335 real 0m1.177s 00:05:51.335 user 0m1.947s 00:05:51.335 sys 0m0.508s 00:05:51.335 10:08:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.335 10:08:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.335 ************************************ 00:05:51.335 END TEST spdkcli_tcp 00:05:51.335 ************************************ 00:05:51.335 10:08:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.335 10:08:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.335 10:08:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.335 10:08:04 -- common/autotest_common.sh@10 -- # set +x 00:05:51.603 ************************************ 00:05:51.603 START TEST dpdk_mem_utility 00:05:51.603 ************************************ 00:05:51.603 10:08:04 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.603 * Looking for test storage... 00:05:51.603 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.603 10:08:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.603 --rc genhtml_branch_coverage=1 00:05:51.603 --rc genhtml_function_coverage=1 00:05:51.603 --rc genhtml_legend=1 00:05:51.603 --rc geninfo_all_blocks=1 00:05:51.603 --rc geninfo_unexecuted_blocks=1 00:05:51.603 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.603 ' 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.603 --rc genhtml_branch_coverage=1 00:05:51.603 --rc genhtml_function_coverage=1 00:05:51.603 --rc genhtml_legend=1 00:05:51.603 --rc geninfo_all_blocks=1 00:05:51.603 --rc geninfo_unexecuted_blocks=1 00:05:51.603 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.603 ' 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.603 --rc genhtml_branch_coverage=1 00:05:51.603 --rc genhtml_function_coverage=1 00:05:51.603 --rc genhtml_legend=1 00:05:51.603 --rc geninfo_all_blocks=1 00:05:51.603 --rc geninfo_unexecuted_blocks=1 00:05:51.603 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.603 ' 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.603 --rc genhtml_branch_coverage=1 00:05:51.603 --rc genhtml_function_coverage=1 00:05:51.603 --rc genhtml_legend=1 00:05:51.603 --rc geninfo_all_blocks=1 00:05:51.603 --rc geninfo_unexecuted_blocks=1 00:05:51.603 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:51.603 ' 00:05:51.603 10:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.603 10:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=459177 00:05:51.603 10:08:05 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.603 10:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 459177 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 459177 ']' 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.603 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.603 [2024-12-12 10:08:05.197029] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:05:51.603 [2024-12-12 10:08:05.197092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459177 ] 00:05:51.863 [2024-12-12 10:08:05.281180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.863 [2024-12-12 10:08:05.323158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.123 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.123 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:52.123 10:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.123 10:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.123 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.123 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.123 { 00:05:52.123 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.123 } 00:05:52.123 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.123 10:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:52.123 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:52.123 1 heaps totaling size 818.000000 MiB 00:05:52.123 size: 818.000000 MiB heap id: 0 00:05:52.123 end heaps---------- 00:05:52.123 9 mempools totaling size 603.782043 MiB 00:05:52.123 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:52.123 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:52.123 size: 100.555481 MiB name: bdev_io_459177 00:05:52.123 size: 50.003479 MiB name: msgpool_459177 00:05:52.123 size: 36.509338 MiB name: fsdev_io_459177 00:05:52.123 size: 21.763794 MiB name: PDU_Pool 00:05:52.123 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:52.123 size: 4.133484 MiB name: evtpool_459177 00:05:52.123 size: 0.026123 MiB name: Session_Pool 00:05:52.123 end mempools------- 00:05:52.123 6 memzones totaling size 4.142822 MiB 00:05:52.123 size: 1.000366 MiB name: RG_ring_0_459177 00:05:52.123 size: 1.000366 MiB name: RG_ring_1_459177 00:05:52.123 size: 1.000366 MiB name: RG_ring_4_459177 
00:05:52.123 size: 1.000366 MiB name: RG_ring_5_459177 00:05:52.123 size: 0.125366 MiB name: RG_ring_2_459177 00:05:52.123 size: 0.015991 MiB name: RG_ring_3_459177 00:05:52.123 end memzones------- 00:05:52.123 10:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:52.123 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:52.123 list of free elements. size: 10.852478 MiB 00:05:52.123 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:52.123 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:52.123 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:52.123 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:52.123 element at address: 0x200008000000 with size: 0.959839 MiB 00:05:52.123 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:52.123 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:52.123 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:52.123 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:52.123 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:52.123 element at address: 0x200003e00000 with size: 0.490723 MiB 00:05:52.123 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:52.123 element at address: 0x200010600000 with size: 0.481934 MiB 00:05:52.123 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:52.123 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:52.123 list of standard malloc elements. size: 199.218628 MiB 00:05:52.123 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:05:52.123 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:05:52.123 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:52.123 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:52.123 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:52.123 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:52.123 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:52.123 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:52.123 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:52.123 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20000085b100 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000008df880 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:52.123 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:52.123 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:52.123 element at address: 0x200000cff0c0 with size: 0.000183 
MiB 00:05:52.123 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:05:52.123 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:05:52.123 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20001067b600 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:05:52.123 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:52.123 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:52.123 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:52.123 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:52.123 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:52.123 list of memzone associated elements. size: 607.928894 MiB 00:05:52.123 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:52.123 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:52.123 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:52.123 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:52.124 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:52.124 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_459177_0 00:05:52.124 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:52.124 associated memzone info: size: 48.002930 MiB name: MP_msgpool_459177_0 00:05:52.124 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:05:52.124 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_459177_0 00:05:52.124 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:52.124 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:52.124 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:52.124 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:52.124 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:52.124 associated memzone info: size: 3.000122 MiB name: MP_evtpool_459177_0 00:05:52.124 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:52.124 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_459177 00:05:52.124 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:52.124 associated memzone info: size: 1.007996 MiB name: MP_evtpool_459177 00:05:52.124 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:05:52.124 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:52.124 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:52.124 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:52.124 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:05:52.124 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:52.124 element at address: 0x200003efde40 with size: 1.008118 MiB 00:05:52.124 associated memzone info: size: 1.007996 MiB name: 
MP_SCSI_TASK_Pool 00:05:52.124 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:52.124 associated memzone info: size: 1.000366 MiB name: RG_ring_0_459177 00:05:52.124 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:52.124 associated memzone info: size: 1.000366 MiB name: RG_ring_1_459177 00:05:52.124 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:52.124 associated memzone info: size: 1.000366 MiB name: RG_ring_4_459177 00:05:52.124 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:52.124 associated memzone info: size: 1.000366 MiB name: RG_ring_5_459177 00:05:52.124 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:05:52.124 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_459177 00:05:52.124 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:52.124 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_459177 00:05:52.124 element at address: 0x20001067b780 with size: 0.500488 MiB 00:05:52.124 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:52.124 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:05:52.124 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:52.124 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:52.124 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:52.124 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:52.124 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_459177 00:05:52.124 element at address: 0x2000008df940 with size: 0.125488 MiB 00:05:52.124 associated memzone info: size: 0.125366 MiB name: RG_ring_2_459177 00:05:52.124 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:05:52.124 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:52.124 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:52.124 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:52.124 element at address: 0x2000008db680 with size: 0.016113 MiB 00:05:52.124 associated memzone info: size: 0.015991 MiB name: RG_ring_3_459177 00:05:52.124 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:52.124 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:52.124 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:52.124 associated memzone info: size: 0.000183 MiB name: MP_msgpool_459177 00:05:52.124 element at address: 0x2000008db480 with size: 0.000305 MiB 00:05:52.124 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_459177 00:05:52.124 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:52.124 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_459177 00:05:52.124 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:52.124 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:52.124 10:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:52.124 10:08:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 459177 00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 459177 ']' 00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 459177 00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 459177 00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 459177' 00:05:52.124 killing process with pid 459177 00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 459177 00:05:52.124 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 459177 00:05:52.384 00:05:52.384 real 0m1.001s 00:05:52.384 user 0m0.905s 00:05:52.384 sys 0m0.448s 00:05:52.384 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.384 10:08:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.384 ************************************ 00:05:52.384 END TEST dpdk_mem_utility 00:05:52.384 ************************************ 00:05:52.643 10:08:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:52.643 10:08:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.643 10:08:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.643 10:08:06 -- common/autotest_common.sh@10 -- # set +x 00:05:52.643 ************************************ 00:05:52.643 START TEST event 00:05:52.643 ************************************ 00:05:52.643 10:08:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:52.643 * Looking for test storage... 00:05:52.643 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:52.643 10:08:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:52.643 10:08:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:52.643 10:08:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:52.643 10:08:06 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:52.643 10:08:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.643 10:08:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.643 10:08:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.643 10:08:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.643 10:08:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.643 10:08:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.643 10:08:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.643 10:08:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.643 10:08:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.643 10:08:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.643 10:08:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.643 10:08:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:52.643 10:08:06 event -- scripts/common.sh@345 -- # : 1 00:05:52.643 10:08:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.643 10:08:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.643 10:08:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:52.643 10:08:06 event -- scripts/common.sh@353 -- # local d=1 00:05:52.643 10:08:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.643 10:08:06 event -- scripts/common.sh@355 -- # echo 1 00:05:52.643 10:08:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.643 10:08:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:52.643 10:08:06 event -- scripts/common.sh@353 -- # local d=2 00:05:52.643 10:08:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.643 10:08:06 event -- scripts/common.sh@355 -- # echo 2 00:05:52.643 10:08:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.643 10:08:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.643 10:08:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.643 10:08:06 event -- scripts/common.sh@368 -- # return 0 00:05:52.643 10:08:06 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.643 10:08:06 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:52.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.643 --rc genhtml_branch_coverage=1 00:05:52.643 --rc genhtml_function_coverage=1 00:05:52.643 --rc genhtml_legend=1 00:05:52.643 --rc geninfo_all_blocks=1 00:05:52.643 --rc geninfo_unexecuted_blocks=1 00:05:52.643 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.643 ' 00:05:52.643 10:08:06 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:52.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.643 --rc genhtml_branch_coverage=1 00:05:52.643 --rc genhtml_function_coverage=1 00:05:52.643 --rc genhtml_legend=1 00:05:52.643 --rc geninfo_all_blocks=1 00:05:52.643 --rc geninfo_unexecuted_blocks=1 00:05:52.643 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.643 ' 00:05:52.643 10:08:06 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:52.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.643 --rc genhtml_branch_coverage=1 00:05:52.643 --rc genhtml_function_coverage=1 00:05:52.643 --rc genhtml_legend=1 00:05:52.644 --rc geninfo_all_blocks=1 00:05:52.644 --rc geninfo_unexecuted_blocks=1 00:05:52.644 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.644 ' 00:05:52.644 10:08:06 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:52.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.644 --rc genhtml_branch_coverage=1 00:05:52.644 --rc genhtml_function_coverage=1 00:05:52.644 --rc genhtml_legend=1 00:05:52.644 --rc geninfo_all_blocks=1 00:05:52.644 --rc geninfo_unexecuted_blocks=1 00:05:52.644 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:52.644 ' 00:05:52.644 10:08:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:52.644 10:08:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:52.644 10:08:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.644 10:08:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:52.644 10:08:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:05:52.644 10:08:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.902 ************************************ 00:05:52.902 START TEST event_perf 00:05:52.902 ************************************ 00:05:52.902 10:08:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:52.902 Running I/O for 1 seconds...[2024-12-12 10:08:06.326019] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:05:52.902 [2024-12-12 10:08:06.326114] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459501 ] 00:05:52.902 [2024-12-12 10:08:06.413818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.902 [2024-12-12 10:08:06.456902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.902 [2024-12-12 10:08:06.457012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.902 [2024-12-12 10:08:06.457117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.902 [2024-12-12 10:08:06.457118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.281 Running I/O for 1 seconds... 00:05:54.281 lcore 0: 191989 00:05:54.281 lcore 1: 191989 00:05:54.281 lcore 2: 191990 00:05:54.281 lcore 3: 191990 00:05:54.281 done. 00:05:54.281 00:05:54.281 real 0m1.187s 00:05:54.281 user 0m4.097s 00:05:54.281 sys 0m0.088s 00:05:54.281 10:08:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.281 10:08:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.281 ************************************ 00:05:54.281 END TEST event_perf 00:05:54.281 ************************************ 00:05:54.281 10:08:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:54.281 10:08:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:54.281 10:08:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.281 10:08:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.281 ************************************ 00:05:54.281 START TEST event_reactor 00:05:54.281 ************************************ 00:05:54.281 10:08:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:54.281 [2024-12-12 10:08:07.601931] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:05:54.281 [2024-12-12 10:08:07.602012] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459669 ] 00:05:54.281 [2024-12-12 10:08:07.690016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.281 [2024-12-12 10:08:07.732519] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.219 test_start 00:05:55.219 oneshot 00:05:55.219 tick 100 00:05:55.219 tick 100 00:05:55.219 tick 250 00:05:55.219 tick 100 00:05:55.219 tick 100 00:05:55.219 tick 100 00:05:55.219 tick 250 00:05:55.219 tick 500 00:05:55.219 tick 100 00:05:55.219 tick 100 00:05:55.219 tick 250 00:05:55.219 tick 100 00:05:55.219 tick 100 00:05:55.219 test_end 00:05:55.219 00:05:55.219 real 0m1.187s 00:05:55.219 user 0m1.093s 00:05:55.219 sys 0m0.091s 00:05:55.219 10:08:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.219 10:08:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:55.219 ************************************ 00:05:55.219 END TEST event_reactor 00:05:55.219 ************************************ 00:05:55.219 10:08:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.219 10:08:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:55.219 10:08:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.219 10:08:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.219 ************************************ 00:05:55.219 START TEST event_reactor_perf 00:05:55.219 ************************************ 00:05:55.219 10:08:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.479 [2024-12-12 10:08:08.874454] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:05:55.479 [2024-12-12 10:08:08.874568] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459834 ] 00:05:55.479 [2024-12-12 10:08:08.964054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.479 [2024-12-12 10:08:09.003406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.415 test_start 00:05:56.415 test_end 00:05:56.415 Performance: 959900 events per second 00:05:56.415 00:05:56.415 real 0m1.187s 00:05:56.415 user 0m1.090s 00:05:56.415 sys 0m0.092s 00:05:56.415 10:08:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.415 10:08:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.415 ************************************ 00:05:56.415 END TEST event_reactor_perf 00:05:56.415 ************************************ 00:05:56.674 10:08:10 event -- event/event.sh@49 -- # uname -s 00:05:56.674 10:08:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:56.674 10:08:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:56.674 10:08:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.674 10:08:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.674 10:08:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.674 ************************************ 00:05:56.674 START TEST event_scheduler 00:05:56.674 ************************************ 00:05:56.674 10:08:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:56.674 * Looking for test storage... 
00:05:56.674 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:56.674 10:08:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.674 10:08:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.674 10:08:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.674 10:08:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.674 10:08:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.674 10:08:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.674 10:08:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.674 10:08:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.934 10:08:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.934 --rc genhtml_branch_coverage=1 00:05:56.934 --rc genhtml_function_coverage=1 00:05:56.934 --rc genhtml_legend=1 00:05:56.934 --rc geninfo_all_blocks=1 00:05:56.934 --rc geninfo_unexecuted_blocks=1 00:05:56.934 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.934 ' 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.934 --rc genhtml_branch_coverage=1 00:05:56.934 --rc genhtml_function_coverage=1 00:05:56.934 --rc genhtml_legend=1 00:05:56.934 --rc geninfo_all_blocks=1 00:05:56.934 --rc geninfo_unexecuted_blocks=1 00:05:56.934 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.934 ' 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.934 --rc genhtml_branch_coverage=1 00:05:56.934 --rc genhtml_function_coverage=1 00:05:56.934 --rc genhtml_legend=1 00:05:56.934 --rc geninfo_all_blocks=1 00:05:56.934 --rc geninfo_unexecuted_blocks=1 00:05:56.934 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.934 ' 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.934 --rc genhtml_branch_coverage=1 00:05:56.934 --rc genhtml_function_coverage=1 00:05:56.934 --rc genhtml_legend=1 00:05:56.934 --rc geninfo_all_blocks=1 00:05:56.934 --rc geninfo_unexecuted_blocks=1 00:05:56.934 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:56.934 ' 00:05:56.934 10:08:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:56.934 10:08:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=460147 00:05:56.934 10:08:10 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.934 10:08:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:56.934 10:08:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 460147 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 460147 ']' 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.934 [2024-12-12 10:08:10.354544] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:05:56.934 [2024-12-12 10:08:10.354628] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460147 ] 00:05:56.934 [2024-12-12 10:08:10.442109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.934 [2024-12-12 10:08:10.488728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.934 [2024-12-12 10:08:10.488811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.934 [2024-12-12 10:08:10.488918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.934 [2024-12-12 10:08:10.488919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:56.934 10:08:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.934 [2024-12-12 10:08:10.537628] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:56.934 [2024-12-12 10:08:10.537647] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:56.934 [2024-12-12 10:08:10.537657] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:56.934 [2024-12-12 10:08:10.537665] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:56.934 [2024-12-12 10:08:10.537672] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.934 10:08:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:56.934 10:08:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.934 
10:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.194 [2024-12-12 10:08:10.612889] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:57.194 10:08:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.194 10:08:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.194 10:08:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.194 10:08:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.194 10:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.194 ************************************ 00:05:57.194 START TEST scheduler_create_thread 00:05:57.194 ************************************ 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.194 2 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.194 3 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.194 4 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.194 5 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.194 10:08:10 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.194 6 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.194 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.194 7 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.195 8 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.195 9 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.195 10 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.195 10:08:10 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.195 10:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.593 10:08:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.594 10:08:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.594 10:08:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.594 10:08:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.594 10:08:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.041 10:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.041 00:06:00.041 real 0m2.622s 00:06:00.041 user 0m0.023s 00:06:00.041 sys 0m0.008s 00:06:00.041 10:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.041 10:08:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.041 ************************************ 00:06:00.042 END TEST scheduler_create_thread 00:06:00.042 ************************************ 00:06:00.042 10:08:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:00.042 10:08:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 460147 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 460147 ']' 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 460147 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 460147 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 460147' 00:06:00.042 killing process with pid 460147 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 460147 00:06:00.042 10:08:13 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 460147 00:06:00.330 [2024-12-12 10:08:13.755239] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
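
The block above is the event_scheduler suite: the scheduler test app is started paused (--wait-for-rpc), the dynamic scheduler is selected, framework init is completed, a mix of pinned active/idle threads is created through the scheduler_plugin RPCs, one thread is deleted again, and the process is killed. A minimal sketch of that flow in bash, assuming the plugin module is importable through PYTHONPATH and that scheduler_thread_create prints the new thread id (as the thread_id=11/12 assignments above suggest):

  #!/usr/bin/env bash
  # Hedged sketch of the event_scheduler flow traced above; paths, flags and RPC
  # names are copied from the log, the PYTHONPATH export is an assumption about
  # where the scheduler_plugin module lives.
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}
  rpc="$SPDK_DIR/scripts/rpc.py"
  export PYTHONPATH="$SPDK_DIR/test/event/scheduler${PYTHONPATH:+:$PYTHONPATH}"

  # Start the scheduler test app on 4 cores (main lcore 2), paused until RPC init.
  "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  trap 'kill -9 $scheduler_pid 2>/dev/null || true' EXIT
  sleep 1                                     # crude stand-in for waitforlisten

  "$rpc" framework_set_scheduler dynamic      # pick the dynamic scheduler
  "$rpc" framework_start_init                 # finish subsystem initialization

  # Create one pinned, fully-active thread on core 0 through the scheduler plugin,
  # then delete it again, mirroring the create/delete pair at the end of the test.
  thread_id=$("$rpc" --plugin scheduler_plugin scheduler_thread_create \
              -n active_pinned -m 0x1 -a 100)
  "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$thread_id"

  kill "$scheduler_pid"
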
00:06:00.330 00:06:00.330 real 0m3.785s 00:06:00.330 user 0m5.597s 00:06:00.330 sys 0m0.478s 00:06:00.330 10:08:13 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.330 10:08:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.330 ************************************ 00:06:00.330 END TEST event_scheduler 00:06:00.330 ************************************ 00:06:00.680 10:08:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:00.680 10:08:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:00.680 10:08:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.680 10:08:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.680 10:08:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.680 ************************************ 00:06:00.680 START TEST app_repeat 00:06:00.680 ************************************ 00:06:00.680 10:08:14 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=460994 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 460994' 00:06:00.680 Process app_repeat pid: 460994 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.680 spdk_app_start Round 0 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 460994 /var/tmp/spdk-nbd.sock 00:06:00.680 10:08:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 460994 ']' 00:06:00.680 10:08:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.680 10:08:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.680 10:08:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.680 10:08:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.680 10:08:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.680 [2024-12-12 10:08:14.031745] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
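
From here the log moves on to the app_repeat suite: app_repeat is launched against a dedicated RPC socket (/var/tmp/spdk-nbd.sock) on core mask 0x3 with a 4-second repeat interval, and the harness waits for that socket before driving it. A minimal sketch of the launch-and-wait pattern; the polling loop below is a rough stand-in for the waitforlisten helper, and rpc_get_methods is simply a cheap RPC used to probe the socket:

  #!/usr/bin/env bash
  # Hedged sketch of the app_repeat launch (Round 0 prologue in the trace).
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}
  sock=/var/tmp/spdk-nbd.sock

  "$SPDK_DIR/test/event/app_repeat/app_repeat" -r "$sock" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'kill -9 $repeat_pid 2>/dev/null || true' EXIT

  # Stand-in for waitforlisten: poll until the UNIX socket answers an RPC.
  for _ in $(seq 1 100); do
      if "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done
  echo "app_repeat (pid $repeat_pid) is listening on $sock"
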
00:06:00.680 [2024-12-12 10:08:14.031827] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460994 ] 00:06:00.680 [2024-12-12 10:08:14.121640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.680 [2024-12-12 10:08:14.164883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.680 [2024-12-12 10:08:14.164884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.680 10:08:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.680 10:08:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:00.680 10:08:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.014 Malloc0 00:06:01.014 10:08:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.014 Malloc1 00:06:01.336 10:08:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.336 /dev/nbd0 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.336 1+0 records in 00:06:01.336 1+0 records out 00:06:01.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229 s, 17.9 MB/s 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.336 10:08:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.336 10:08:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.595 /dev/nbd1 00:06:01.595 10:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.595 10:08:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.595 1+0 records in 00:06:01.595 1+0 records out 00:06:01.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380569 s, 10.8 MB/s 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.595 10:08:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:01.595 10:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.595 10:08:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
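
Inside each round the test creates two 64 MiB malloc bdevs (Malloc0, Malloc1) and exports them as kernel nbd devices; waitfornbd then proves each device is usable by waiting for its /proc/partitions entry and reading one 4 KiB block with dd, exactly as traced above. A minimal sketch of that attach step, using only the RPCs visible in the trace; the nbd kernel module is assumed to be loaded (modprobe nbd) and the scratch file path is shortened to /tmp for readability:

  #!/usr/bin/env bash
  # Hedged sketch of the malloc-bdev + nbd attach step from the app_repeat rounds.
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

  rpc bdev_malloc_create 64 4096           # Malloc0: 64 MiB, 4096-byte blocks
  rpc bdev_malloc_create 64 4096           # Malloc1
  rpc nbd_start_disk Malloc0 /dev/nbd0     # expose each bdev as a block device
  rpc nbd_start_disk Malloc1 /dev/nbd1

  # Rough equivalent of waitfornbd: wait for the partition entry, read one block.
  for nbd in nbd0 nbd1; do
      until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
      dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]
      rm -f /tmp/nbdtest
  done
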
00:06:01.595 10:08:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.595 10:08:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.595 10:08:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.854 10:08:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.855 { 00:06:01.855 "nbd_device": "/dev/nbd0", 00:06:01.855 "bdev_name": "Malloc0" 00:06:01.855 }, 00:06:01.855 { 00:06:01.855 "nbd_device": "/dev/nbd1", 00:06:01.855 "bdev_name": "Malloc1" 00:06:01.855 } 00:06:01.855 ]' 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.855 { 00:06:01.855 "nbd_device": "/dev/nbd0", 00:06:01.855 "bdev_name": "Malloc0" 00:06:01.855 }, 00:06:01.855 { 00:06:01.855 "nbd_device": "/dev/nbd1", 00:06:01.855 "bdev_name": "Malloc1" 00:06:01.855 } 00:06:01.855 ]' 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.855 /dev/nbd1' 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.855 /dev/nbd1' 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.855 256+0 records in 00:06:01.855 256+0 records out 00:06:01.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106596 s, 98.4 MB/s 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.855 256+0 records in 00:06:01.855 256+0 records out 00:06:01.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201185 s, 52.1 MB/s 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.855 10:08:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.114 256+0 records in 00:06:02.114 256+0 records out 00:06:02.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216937 s, 48.3 
MB/s 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.114 10:08:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.374 10:08:15 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.374 10:08:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.633 10:08:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.633 10:08:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.893 10:08:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.152 [2024-12-12 10:08:16.566461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.152 [2024-12-12 10:08:16.603021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.152 [2024-12-12 10:08:16.603021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.152 [2024-12-12 10:08:16.644455] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.152 [2024-12-12 10:08:16.644494] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.444 10:08:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.444 10:08:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:06.444 spdk_app_start Round 1 00:06:06.444 10:08:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 460994 /var/tmp/spdk-nbd.sock 00:06:06.444 10:08:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 460994 ']' 00:06:06.444 10:08:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.444 10:08:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.444 10:08:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
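
The heart of every round is the data check traced above: 1 MiB of random data is written through each nbd device with dd and read back with cmp, the devices are detached with nbd_stop_disk, and the target is told to exit with spdk_kill_instance SIGTERM before the next round (Round 1 here) starts. A minimal sketch of that verify-and-teardown step, with the random-data file moved to /tmp for readability:

  #!/usr/bin/env bash
  # Hedged sketch of the per-round write/verify/teardown seen in the trace.
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
  tmp=/tmp/nbdrandtest

  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data

  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through nbd
      cmp -b -n 1M "$tmp" "$nbd"                              # read back and compare
  done
  rm -f "$tmp"

  for nbd in nbd0 nbd1; do
      rpc nbd_stop_disk "/dev/$nbd"
      # Rough equivalent of waitfornbd_exit: wait for the kernel to drop the device.
      while grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
  done

  rpc spdk_kill_instance SIGTERM      # ask the target to exit before the next round
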
00:06:06.444 10:08:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.444 10:08:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.444 10:08:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.444 10:08:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.444 10:08:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.444 Malloc0 00:06:06.444 10:08:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.444 Malloc1 00:06:06.444 10:08:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.444 10:08:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.704 /dev/nbd0 00:06:06.704 10:08:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.704 10:08:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.704 1+0 records in 00:06:06.704 1+0 records out 00:06:06.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275729 s, 14.9 MB/s 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.704 10:08:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.704 10:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.704 10:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.704 10:08:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.963 /dev/nbd1 00:06:06.963 10:08:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.963 10:08:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.963 1+0 records in 00:06:06.963 1+0 records out 00:06:06.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296351 s, 13.8 MB/s 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:06.963 10:08:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.964 10:08:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:06.964 10:08:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.964 10:08:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.964 10:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.964 10:08:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.964 10:08:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.964 10:08:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.964 10:08:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:07.223 10:08:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.223 { 00:06:07.223 "nbd_device": "/dev/nbd0", 00:06:07.223 "bdev_name": "Malloc0" 00:06:07.223 }, 00:06:07.223 { 00:06:07.223 "nbd_device": "/dev/nbd1", 00:06:07.223 "bdev_name": "Malloc1" 00:06:07.223 } 00:06:07.223 ]' 00:06:07.223 10:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.223 { 00:06:07.223 "nbd_device": "/dev/nbd0", 00:06:07.223 "bdev_name": "Malloc0" 00:06:07.223 }, 00:06:07.223 { 00:06:07.223 "nbd_device": "/dev/nbd1", 00:06:07.223 "bdev_name": "Malloc1" 00:06:07.223 } 00:06:07.223 ]' 00:06:07.223 10:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.224 /dev/nbd1' 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.224 /dev/nbd1' 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.224 256+0 records in 00:06:07.224 256+0 records out 00:06:07.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112922 s, 92.9 MB/s 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.224 256+0 records in 00:06:07.224 256+0 records out 00:06:07.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202855 s, 51.7 MB/s 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.224 10:08:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.483 256+0 records in 00:06:07.483 256+0 records out 00:06:07.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216521 s, 48.4 MB/s 00:06:07.483 10:08:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.483 10:08:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.483 10:08:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.483 10:08:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:07.483 10:08:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.483 10:08:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.483 10:08:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.483 10:08:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.484 10:08:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.484 10:08:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.744 10:08:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.004 10:08:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.004 10:08:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.266 10:08:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.526 [2024-12-12 10:08:21.945674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.526 [2024-12-12 10:08:21.982813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.526 [2024-12-12 10:08:21.982813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.526 [2024-12-12 10:08:22.025084] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.526 [2024-12-12 10:08:22.025125] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.881 10:08:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.881 10:08:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.881 spdk_app_start Round 2 00:06:11.881 10:08:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 460994 /var/tmp/spdk-nbd.sock 00:06:11.881 10:08:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 460994 ']' 00:06:11.881 10:08:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.881 10:08:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.881 10:08:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
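
Each round also sanity-checks the device list twice: nbd_get_disks is parsed with jq to pull out the nbd_device paths, and grep -c must report 2 devices right after attach and 0 right after detach. A minimal sketch of that check, mirroring the jq/grep pipeline in the trace; masking grep's non-zero exit with true follows the trace's handling of the empty-list case:

  #!/usr/bin/env bash
  # Hedged sketch of the nbd_get_disks count check repeated in every round.
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

  nbd_count() {
      # nbd_get_disks returns JSON like:
      #   [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ... ]
      local names
      names=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device')
      # grep -c exits 1 when it matches nothing, so mask that for the empty case.
      echo "$names" | grep -c /dev/nbd || true
  }

  expected=2                          # 2 right after attach, 0 right after detach
  count=$(nbd_count)
  if [ "$count" -ne "$expected" ]; then
      echo "expected $expected nbd devices, found $count" >&2
      exit 1
  fi
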
00:06:11.881 10:08:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.881 10:08:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.881 10:08:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.881 10:08:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.881 10:08:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.881 Malloc0 00:06:11.881 10:08:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.881 Malloc1 00:06:11.881 10:08:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.881 10:08:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.881 10:08:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.881 10:08:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.881 10:08:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.882 10:08:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.141 /dev/nbd0 00:06:12.141 10:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.141 10:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.141 1+0 records in 00:06:12.141 1+0 records out 00:06:12.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001918 s, 21.4 MB/s 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.141 10:08:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.141 10:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.141 10:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.141 10:08:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.400 /dev/nbd1 00:06:12.400 10:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.400 10:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.400 10:08:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:12.400 10:08:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.400 10:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.400 10:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.400 10:08:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:12.400 10:08:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.400 10:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.400 10:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.400 10:08:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.400 1+0 records in 00:06:12.400 1+0 records out 00:06:12.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251695 s, 16.3 MB/s 00:06:12.401 10:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:12.401 10:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.401 10:08:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:12.401 10:08:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.401 10:08:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.401 10:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.401 10:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.401 10:08:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.401 10:08:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.401 10:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.660 { 00:06:12.660 "nbd_device": "/dev/nbd0", 00:06:12.660 "bdev_name": "Malloc0" 00:06:12.660 }, 00:06:12.660 { 00:06:12.660 "nbd_device": "/dev/nbd1", 00:06:12.660 "bdev_name": "Malloc1" 00:06:12.660 } 00:06:12.660 ]' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.660 { 00:06:12.660 "nbd_device": "/dev/nbd0", 00:06:12.660 "bdev_name": "Malloc0" 00:06:12.660 }, 00:06:12.660 { 00:06:12.660 "nbd_device": "/dev/nbd1", 00:06:12.660 "bdev_name": "Malloc1" 00:06:12.660 } 00:06:12.660 ]' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.660 /dev/nbd1' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.660 /dev/nbd1' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.660 256+0 records in 00:06:12.660 256+0 records out 00:06:12.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115087 s, 91.1 MB/s 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.660 256+0 records in 00:06:12.660 256+0 records out 00:06:12.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203267 s, 51.6 MB/s 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.660 256+0 records in 00:06:12.660 256+0 records out 00:06:12.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215294 s, 48.7 MB/s 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.660 10:08:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.661 10:08:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.661 10:08:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.661 10:08:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.920 10:08:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.179 10:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.438 10:08:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.438 10:08:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.697 10:08:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.697 [2024-12-12 10:08:27.307785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.956 [2024-12-12 10:08:27.347069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.956 [2024-12-12 10:08:27.347070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.956 [2024-12-12 10:08:27.388065] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.956 [2024-12-12 10:08:27.388106] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.246 10:08:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 460994 /var/tmp/spdk-nbd.sock 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 460994 ']' 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
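The trace above exercises SPDK's NBD helpers end to end: create two malloc bdevs over the /var/tmp/spdk-nbd.sock RPC socket, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device, read it back with cmp, then detach and confirm nbd_get_disks returns an empty list. A minimal sketch of that flow, reconstructed from the commands visible in the log (not output from this run; the real logic lives in bdev/nbd_common.sh, the temp-file path below is arbitrary, and the helpers' retry/error handling is omitted):

rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# create two 64 MB malloc bdevs with a 4096-byte block size and export them over NBD
bdev0=$($rpc -s $sock bdev_malloc_create 64 4096)   # prints Malloc0
bdev1=$($rpc -s $sock bdev_malloc_create 64 4096)   # prints Malloc1
$rpc -s $sock nbd_start_disk "$bdev0" /dev/nbd0
$rpc -s $sock nbd_start_disk "$bdev1" /dev/nbd1

# write 1 MiB of random data through each NBD device and read it back
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest $nbd
done

# detach both devices and confirm nothing is left exported
$rpc -s $sock nbd_stop_disk /dev/nbd0
$rpc -s $sock nbd_stop_disk /dev/nbd1
$rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device'   # expect empty output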
00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.246 10:08:30 event.app_repeat -- event/event.sh@39 -- # killprocess 460994 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 460994 ']' 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 460994 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 460994 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 460994' 00:06:17.246 killing process with pid 460994 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 460994 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 460994 00:06:17.246 spdk_app_start is called in Round 0. 00:06:17.246 Shutdown signal received, stop current app iteration 00:06:17.246 Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 reinitialization... 00:06:17.246 spdk_app_start is called in Round 1. 00:06:17.246 Shutdown signal received, stop current app iteration 00:06:17.246 Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 reinitialization... 00:06:17.246 spdk_app_start is called in Round 2. 00:06:17.246 Shutdown signal received, stop current app iteration 00:06:17.246 Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 reinitialization... 00:06:17.246 spdk_app_start is called in Round 3. 
00:06:17.246 Shutdown signal received, stop current app iteration 00:06:17.246 10:08:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.246 10:08:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:17.246 00:06:17.246 real 0m16.568s 00:06:17.246 user 0m35.811s 00:06:17.246 sys 0m3.220s 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.246 10:08:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.246 ************************************ 00:06:17.246 END TEST app_repeat 00:06:17.246 ************************************ 00:06:17.246 10:08:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.246 10:08:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.246 10:08:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.246 10:08:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.246 10:08:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.246 ************************************ 00:06:17.246 START TEST cpu_locks 00:06:17.246 ************************************ 00:06:17.246 10:08:30 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.246 * Looking for test storage... 00:06:17.246 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:17.246 10:08:30 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.246 10:08:30 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.246 10:08:30 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.246 10:08:30 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:17.246 10:08:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.247 10:08:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:17.247 10:08:30 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.247 10:08:30 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.247 --rc genhtml_branch_coverage=1 00:06:17.247 --rc genhtml_function_coverage=1 00:06:17.247 --rc genhtml_legend=1 00:06:17.247 --rc geninfo_all_blocks=1 00:06:17.247 --rc geninfo_unexecuted_blocks=1 00:06:17.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.247 ' 00:06:17.247 10:08:30 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.247 --rc genhtml_branch_coverage=1 00:06:17.247 --rc genhtml_function_coverage=1 00:06:17.247 --rc genhtml_legend=1 00:06:17.247 --rc geninfo_all_blocks=1 00:06:17.247 --rc geninfo_unexecuted_blocks=1 00:06:17.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.247 ' 00:06:17.247 10:08:30 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.247 --rc genhtml_branch_coverage=1 00:06:17.247 --rc genhtml_function_coverage=1 00:06:17.247 --rc genhtml_legend=1 00:06:17.247 --rc geninfo_all_blocks=1 00:06:17.247 --rc geninfo_unexecuted_blocks=1 00:06:17.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.247 ' 00:06:17.247 10:08:30 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.247 --rc genhtml_branch_coverage=1 00:06:17.247 --rc genhtml_function_coverage=1 00:06:17.247 --rc genhtml_legend=1 00:06:17.247 --rc geninfo_all_blocks=1 00:06:17.247 --rc geninfo_unexecuted_blocks=1 00:06:17.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:17.247 ' 00:06:17.247 10:08:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.247 10:08:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.247 10:08:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.247 10:08:30 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:17.247 10:08:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.247 10:08:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.247 10:08:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.506 ************************************ 00:06:17.506 START TEST default_locks 00:06:17.506 ************************************ 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=464137 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 464137 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 464137 ']' 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.506 10:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.506 [2024-12-12 10:08:30.915449] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:06:17.506 [2024-12-12 10:08:30.915509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464137 ] 00:06:17.506 [2024-12-12 10:08:31.002614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.506 [2024-12-12 10:08:31.044341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.765 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.765 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:17.765 10:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 464137 00:06:17.765 10:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 464137 00:06:17.765 10:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.333 lslocks: write error 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 464137 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 464137 ']' 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 464137 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464137 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464137' 00:06:18.333 killing process with pid 464137 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 464137 00:06:18.333 10:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 464137 00:06:18.592 10:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 464137 00:06:18.592 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:18.592 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 464137 00:06:18.592 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:18.592 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.592 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 464137 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 464137 ']' 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 
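The default_locks run above reduces to one invariant: an spdk_tgt pinned to core 0 with -m 0x1 must hold an spdk_cpu_lock file that lslocks can list for its pid, and the same check must fail once the process has been killed. A minimal sketch of the positive half, assuming the same binary and core mask as this run (the real checks are the locks_exist and killprocess helpers in event/cpu_locks.sh and autotest_common.sh; the "lslocks: write error" in the log appears to be lslocks hitting the pipe that grep -q closes early, not a test failure):

spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
$spdk_tgt -m 0x1 &                         # claim CPU core 0
pid=$!
sleep 1                                    # the real test uses waitforlisten on the pid instead of a fixed delay

# once the target is up, the core-0 lock is listed for that pid
lslocks -p $pid | grep -q spdk_cpu_lock && echo "core lock held"

kill $pid
wait $pid                                  # the lock is released with the process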
00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.852 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (464137) - No such process 00:06:18.852 ERROR: process (pid: 464137) is no longer running 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.852 00:06:18.852 real 0m1.345s 00:06:18.852 user 0m1.326s 00:06:18.852 sys 0m0.673s 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.852 10:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.852 ************************************ 00:06:18.852 END TEST default_locks 00:06:18.852 ************************************ 00:06:18.852 10:08:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:18.852 10:08:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.852 10:08:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.852 10:08:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.852 ************************************ 00:06:18.852 START TEST default_locks_via_rpc 00:06:18.852 ************************************ 00:06:18.852 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:18.852 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=464376 00:06:18.852 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.852 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 464376 00:06:18.852 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 464376 ']' 00:06:18.852 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.852 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.852 10:08:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.852 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.852 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.852 [2024-12-12 10:08:32.332638] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:18.852 [2024-12-12 10:08:32.332699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464376 ] 00:06:18.852 [2024-12-12 10:08:32.418293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.852 [2024-12-12 10:08:32.462081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 464376 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 464376 00:06:19.112 10:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 464376 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 464376 ']' 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 464376 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
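default_locks_via_rpc covers the same lock file, but toggled at runtime instead of at startup: the rpc_cmd calls traced above drop the per-core locks with framework_disable_cpumask_locks and take them again with framework_enable_cpumask_locks, after which the same lslocks check must succeed. A small sketch of that toggle against an already-running target (rpc.py defaults to /var/tmp/spdk.sock, matching this run; tgt_pid below is a placeholder for the target's pid):

rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py

$rpc framework_disable_cpumask_locks                           # release the per-core lock files
lslocks -p $tgt_pid | grep -q spdk_cpu_lock || echo "no core lock held"

$rpc framework_enable_cpumask_locks                            # re-acquire them
lslocks -p $tgt_pid | grep -q spdk_cpu_lock && echo "core lock held again"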
00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464376 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464376' 00:06:19.680 killing process with pid 464376 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 464376 00:06:19.680 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 464376 00:06:19.939 00:06:19.939 real 0m1.177s 00:06:19.939 user 0m1.163s 00:06:19.939 sys 0m0.550s 00:06:19.939 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.939 10:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.939 ************************************ 00:06:19.939 END TEST default_locks_via_rpc 00:06:19.939 ************************************ 00:06:19.939 10:08:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:19.939 10:08:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.939 10:08:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.939 10:08:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.939 ************************************ 00:06:19.939 START TEST non_locking_app_on_locked_coremask 00:06:19.939 ************************************ 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=464532 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 464532 /var/tmp/spdk.sock 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 464532 ']' 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.939 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.199 [2024-12-12 10:08:33.593931] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:06:20.199 [2024-12-12 10:08:33.593989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464532 ] 00:06:20.199 [2024-12-12 10:08:33.676855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.199 [2024-12-12 10:08:33.719326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=464727 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 464727 /var/tmp/spdk2.sock 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 464727 ']' 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.458 10:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.458 [2024-12-12 10:08:33.971691] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:20.458 [2024-12-12 10:08:33.971775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464727 ] 00:06:20.458 [2024-12-12 10:08:34.069462] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
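non_locking_app_on_locked_coremask pairs the two startup modes seen in the trace: the first spdk_tgt takes core 0 and its lock, and the second is started on the same core with --disable-cpumask-locks plus its own RPC socket, so it comes up without contending for the lock (the "CPU core locks deactivated" notice above). A compressed sketch of that pairing, using the same flags and second-socket path as the run (waitforlisten and the later lslocks/killprocess steps are omitted):

spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
$spdk_tgt -m 0x1 &                                                  # holds the core-0 lock
$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, takes no lock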
00:06:20.458 [2024-12-12 10:08:34.069491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.717 [2024-12-12 10:08:34.149502] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.286 10:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.286 10:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.286 10:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 464532 00:06:21.286 10:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 464532 00:06:21.286 10:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.663 lslocks: write error 00:06:22.663 10:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 464532 00:06:22.663 10:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 464532 ']' 00:06:22.663 10:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 464532 00:06:22.663 10:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.663 10:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.663 10:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464532 00:06:22.663 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.663 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.663 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464532' 00:06:22.663 killing process with pid 464532 00:06:22.663 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 464532 00:06:22.663 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 464532 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 464727 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 464727 ']' 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 464727 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464727 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464727' 00:06:23.230 killing 
process with pid 464727 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 464727 00:06:23.230 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 464727 00:06:23.489 00:06:23.489 real 0m3.387s 00:06:23.489 user 0m3.523s 00:06:23.489 sys 0m1.265s 00:06:23.489 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.489 10:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.489 ************************************ 00:06:23.489 END TEST non_locking_app_on_locked_coremask 00:06:23.489 ************************************ 00:06:23.489 10:08:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:23.489 10:08:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.489 10:08:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.489 10:08:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.489 ************************************ 00:06:23.489 START TEST locking_app_on_unlocked_coremask 00:06:23.489 ************************************ 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=465301 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 465301 /var/tmp/spdk.sock 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 465301 ']' 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.489 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.489 [2024-12-12 10:08:37.070649] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:23.489 [2024-12-12 10:08:37.070732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465301 ] 00:06:23.748 [2024-12-12 10:08:37.158858] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.748 [2024-12-12 10:08:37.158887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.748 [2024-12-12 10:08:37.201053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=465333 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 465333 /var/tmp/spdk2.sock 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 465333 ']' 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.008 10:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.008 [2024-12-12 10:08:37.428507] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:06:24.008 [2024-12-12 10:08:37.428567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465333 ] 00:06:24.008 [2024-12-12 10:08:37.523607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.008 [2024-12-12 10:08:37.610896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.947 10:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.947 10:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.947 10:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 465333 00:06:24.947 10:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 465333 00:06:24.947 10:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.884 lslocks: write error 00:06:25.884 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 465301 00:06:25.884 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 465301 ']' 00:06:25.884 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 465301 00:06:25.884 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.884 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.884 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465301 00:06:26.143 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.143 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.143 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465301' 00:06:26.143 killing process with pid 465301 00:06:26.143 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 465301 00:06:26.143 10:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 465301 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 465333 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 465333 ']' 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 465333 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465333 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.712 10:08:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465333' 00:06:26.712 killing process with pid 465333 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 465333 00:06:26.712 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 465333 00:06:26.971 00:06:26.971 real 0m3.449s 00:06:26.971 user 0m3.640s 00:06:26.971 sys 0m1.302s 00:06:26.971 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.971 10:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.971 ************************************ 00:06:26.971 END TEST locking_app_on_unlocked_coremask 00:06:26.971 ************************************ 00:06:26.971 10:08:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.971 10:08:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.971 10:08:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.971 10:08:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.971 ************************************ 00:06:26.971 START TEST locking_app_on_locked_coremask 00:06:26.971 ************************************ 00:06:26.971 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:26.971 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=465901 00:06:26.972 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 465901 /var/tmp/spdk.sock 00:06:26.972 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.972 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 465901 ']' 00:06:26.972 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.972 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.972 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.972 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.972 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.972 [2024-12-12 10:08:40.600702] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:06:26.972 [2024-12-12 10:08:40.600767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465901 ] 00:06:27.231 [2024-12-12 10:08:40.687619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.231 [2024-12-12 10:08:40.727395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.490 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.490 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=465910 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 465910 /var/tmp/spdk2.sock 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 465910 /var/tmp/spdk2.sock 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 465910 /var/tmp/spdk2.sock 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 465910 ']' 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.491 10:08:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.491 [2024-12-12 10:08:40.976262] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:06:27.491 [2024-12-12 10:08:40.976324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465910 ] 00:06:27.491 [2024-12-12 10:08:41.070103] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 465901 has claimed it. 00:06:27.491 [2024-12-12 10:08:41.070145] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.059 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (465910) - No such process 00:06:28.059 ERROR: process (pid: 465910) is no longer running 00:06:28.059 10:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.059 10:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:28.059 10:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:28.059 10:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.059 10:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.059 10:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.059 10:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 465901 00:06:28.059 10:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 465901 00:06:28.059 10:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.996 lslocks: write error 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 465901 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 465901 ']' 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 465901 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465901 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465901' 00:06:28.996 killing process with pid 465901 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 465901 00:06:28.996 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 465901 00:06:29.256 00:06:29.256 real 0m2.067s 00:06:29.256 user 0m2.211s 00:06:29.256 sys 0m0.794s 00:06:29.256 10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.256 
10:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.256 ************************************ 00:06:29.256 END TEST locking_app_on_locked_coremask 00:06:29.256 ************************************ 00:06:29.256 10:08:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:29.256 10:08:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.256 10:08:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.256 10:08:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.256 ************************************ 00:06:29.256 START TEST locking_overlapped_coremask 00:06:29.256 ************************************ 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=466207 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 466207 /var/tmp/spdk.sock 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 466207 ']' 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.256 10:08:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.256 [2024-12-12 10:08:42.749197] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
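Aside on the locks_exist check traced above: it asks lslocks which files the target process holds and filters for the spdk_cpu_lock prefix, and judging by the names compared later in this run the target keeps one lock file per claimed core under /var/tmp. A minimal manual sketch of the same check (not part of the test; replace <spdk_tgt_pid> with the pid of interest):

    $ ls -l /var/tmp/spdk_cpu_lock_*                    # e.g. ..._000 for a target started with -m 0x1
    $ lslocks -p <spdk_tgt_pid> | grep spdk_cpu_lock    # non-empty output means the core locks are still held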
00:06:29.256 [2024-12-12 10:08:42.749259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466207 ] 00:06:29.256 [2024-12-12 10:08:42.835260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.256 [2024-12-12 10:08:42.877909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.256 [2024-12-12 10:08:42.878031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.256 [2024-12-12 10:08:42.878032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=466377 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 466377 /var/tmp/spdk2.sock 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 466377 /var/tmp/spdk2.sock 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 466377 /var/tmp/spdk2.sock 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 466377 ']' 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.515 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.515 [2024-12-12 10:08:43.131313] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:06:29.515 [2024-12-12 10:08:43.131403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466377 ] 00:06:29.774 [2024-12-12 10:08:43.231987] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 466207 has claimed it. 00:06:29.774 [2024-12-12 10:08:43.232028] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.342 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (466377) - No such process 00:06:30.342 ERROR: process (pid: 466377) is no longer running 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 466207 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 466207 ']' 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 466207 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466207 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466207' 00:06:30.342 killing process with pid 466207 00:06:30.342 10:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 466207 00:06:30.342 10:08:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 466207 00:06:30.601 00:06:30.601 real 0m1.444s 00:06:30.601 user 0m4.000s 00:06:30.601 sys 0m0.436s 00:06:30.601 10:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.601 10:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.601 ************************************ 00:06:30.601 END TEST locking_overlapped_coremask 00:06:30.601 ************************************ 00:06:30.601 10:08:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:30.601 10:08:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.601 10:08:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.601 10:08:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.861 ************************************ 00:06:30.861 START TEST locking_overlapped_coremask_via_rpc 00:06:30.861 ************************************ 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=466514 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 466514 /var/tmp/spdk.sock 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 466514 ']' 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.861 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.861 [2024-12-12 10:08:44.276363] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:30.861 [2024-12-12 10:08:44.276422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466514 ] 00:06:30.861 [2024-12-12 10:08:44.356995] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.861 [2024-12-12 10:08:44.357024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.861 [2024-12-12 10:08:44.397683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.861 [2024-12-12 10:08:44.397794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.861 [2024-12-12 10:08:44.397796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=466658 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 466658 /var/tmp/spdk2.sock 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 466658 ']' 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.120 10:08:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.120 [2024-12-12 10:08:44.650312] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:31.120 [2024-12-12 10:08:44.650400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466658 ] 00:06:31.120 [2024-12-12 10:08:44.752826] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.120 [2024-12-12 10:08:44.752861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.380 [2024-12-12 10:08:44.835901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.380 [2024-12-12 10:08:44.839767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.380 [2024-12-12 10:08:44.839769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.957 [2024-12-12 10:08:45.532787] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 466514 has claimed it. 
00:06:31.957 request: 00:06:31.957 { 00:06:31.957 "method": "framework_enable_cpumask_locks", 00:06:31.957 "req_id": 1 00:06:31.957 } 00:06:31.957 Got JSON-RPC error response 00:06:31.957 response: 00:06:31.957 { 00:06:31.957 "code": -32603, 00:06:31.957 "message": "Failed to claim CPU core: 2" 00:06:31.957 } 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 466514 /var/tmp/spdk.sock 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 466514 ']' 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.957 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.216 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.216 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.216 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 466658 /var/tmp/spdk2.sock 00:06:32.216 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 466658 ']' 00:06:32.216 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.216 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.216 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
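The -32603 response shown above is what framework_enable_cpumask_locks returns while another target still holds the contested per-core lock. A hedged manual reproduction with the same rpc.py and socket this test drives, run from the SPDK source tree; it is expected to fail exactly as in the trace for as long as the first target (started with -m 0x7) holds core 2:

    $ scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # JSON-RPC error -32603: Failed to claim CPU core: 2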
00:06:32.216 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.216 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.476 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.476 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.476 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.476 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.476 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.476 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.476 00:06:32.476 real 0m1.694s 00:06:32.476 user 0m0.792s 00:06:32.476 sys 0m0.170s 00:06:32.476 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.476 10:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.476 ************************************ 00:06:32.476 END TEST locking_overlapped_coremask_via_rpc 00:06:32.476 ************************************ 00:06:32.476 10:08:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.476 10:08:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 466514 ]] 00:06:32.476 10:08:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 466514 00:06:32.476 10:08:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 466514 ']' 00:06:32.476 10:08:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 466514 00:06:32.476 10:08:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:32.476 10:08:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.476 10:08:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466514 00:06:32.476 10:08:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.476 10:08:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.476 10:08:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466514' 00:06:32.476 killing process with pid 466514 00:06:32.476 10:08:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 466514 00:06:32.476 10:08:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 466514 00:06:32.735 10:08:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 466658 ]] 00:06:32.735 10:08:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 466658 00:06:32.735 10:08:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 466658 ']' 00:06:32.735 10:08:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 466658 00:06:32.735 10:08:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:32.735 10:08:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:06:32.735 10:08:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466658 00:06:32.995 10:08:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:32.995 10:08:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:32.995 10:08:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466658' 00:06:32.995 killing process with pid 466658 00:06:32.995 10:08:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 466658 00:06:32.995 10:08:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 466658 00:06:33.254 10:08:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.254 10:08:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:33.254 10:08:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 466514 ]] 00:06:33.254 10:08:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 466514 00:06:33.254 10:08:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 466514 ']' 00:06:33.254 10:08:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 466514 00:06:33.254 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (466514) - No such process 00:06:33.254 10:08:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 466514 is not found' 00:06:33.254 Process with pid 466514 is not found 00:06:33.255 10:08:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 466658 ]] 00:06:33.255 10:08:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 466658 00:06:33.255 10:08:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 466658 ']' 00:06:33.255 10:08:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 466658 00:06:33.255 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (466658) - No such process 00:06:33.255 10:08:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 466658 is not found' 00:06:33.255 Process with pid 466658 is not found 00:06:33.255 10:08:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.255 00:06:33.255 real 0m16.072s 00:06:33.255 user 0m26.540s 00:06:33.255 sys 0m6.272s 00:06:33.255 10:08:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.255 10:08:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.255 ************************************ 00:06:33.255 END TEST cpu_locks 00:06:33.255 ************************************ 00:06:33.255 00:06:33.255 real 0m40.694s 00:06:33.255 user 1m14.522s 00:06:33.255 sys 0m10.709s 00:06:33.255 10:08:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.255 10:08:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.255 ************************************ 00:06:33.255 END TEST event 00:06:33.255 ************************************ 00:06:33.255 10:08:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:33.255 10:08:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.255 10:08:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.255 10:08:46 -- common/autotest_common.sh@10 -- # set +x 00:06:33.255 ************************************ 00:06:33.255 START TEST thread 00:06:33.255 ************************************ 00:06:33.255 10:08:46 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:33.515 * Looking for test storage... 00:06:33.515 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:33.515 10:08:46 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:33.515 10:08:46 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:33.515 10:08:46 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:33.515 10:08:47 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:33.515 10:08:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.515 10:08:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.515 10:08:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.515 10:08:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.515 10:08:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.515 10:08:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.515 10:08:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.515 10:08:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.515 10:08:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.515 10:08:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.515 10:08:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.515 10:08:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:33.515 10:08:47 thread -- scripts/common.sh@345 -- # : 1 00:06:33.515 10:08:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.515 10:08:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.515 10:08:47 thread -- scripts/common.sh@365 -- # decimal 1 00:06:33.515 10:08:47 thread -- scripts/common.sh@353 -- # local d=1 00:06:33.515 10:08:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.515 10:08:47 thread -- scripts/common.sh@355 -- # echo 1 00:06:33.515 10:08:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.515 10:08:47 thread -- scripts/common.sh@366 -- # decimal 2 00:06:33.515 10:08:47 thread -- scripts/common.sh@353 -- # local d=2 00:06:33.515 10:08:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.515 10:08:47 thread -- scripts/common.sh@355 -- # echo 2 00:06:33.515 10:08:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.515 10:08:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.515 10:08:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.515 10:08:47 thread -- scripts/common.sh@368 -- # return 0 00:06:33.515 10:08:47 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.515 10:08:47 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:33.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.515 --rc genhtml_branch_coverage=1 00:06:33.515 --rc genhtml_function_coverage=1 00:06:33.515 --rc genhtml_legend=1 00:06:33.515 --rc geninfo_all_blocks=1 00:06:33.515 --rc geninfo_unexecuted_blocks=1 00:06:33.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.515 ' 00:06:33.515 10:08:47 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:33.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.515 --rc genhtml_branch_coverage=1 00:06:33.515 --rc genhtml_function_coverage=1 00:06:33.515 --rc genhtml_legend=1 00:06:33.515 --rc geninfo_all_blocks=1 
00:06:33.515 --rc geninfo_unexecuted_blocks=1 00:06:33.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.515 ' 00:06:33.515 10:08:47 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:33.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.515 --rc genhtml_branch_coverage=1 00:06:33.515 --rc genhtml_function_coverage=1 00:06:33.515 --rc genhtml_legend=1 00:06:33.515 --rc geninfo_all_blocks=1 00:06:33.515 --rc geninfo_unexecuted_blocks=1 00:06:33.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.515 ' 00:06:33.515 10:08:47 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:33.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.515 --rc genhtml_branch_coverage=1 00:06:33.515 --rc genhtml_function_coverage=1 00:06:33.515 --rc genhtml_legend=1 00:06:33.515 --rc geninfo_all_blocks=1 00:06:33.515 --rc geninfo_unexecuted_blocks=1 00:06:33.515 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:33.515 ' 00:06:33.515 10:08:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.515 10:08:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:33.515 10:08:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.515 10:08:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.515 ************************************ 00:06:33.515 START TEST thread_poller_perf 00:06:33.515 ************************************ 00:06:33.515 10:08:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.515 [2024-12-12 10:08:47.103871] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:33.515 [2024-12-12 10:08:47.103949] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467152 ] 00:06:33.774 [2024-12-12 10:08:47.189947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.774 [2024-12-12 10:08:47.229329] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.774 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:34.712 [2024-12-12T09:08:48.349Z] ====================================== 00:06:34.712 [2024-12-12T09:08:48.349Z] busy:2504236580 (cyc) 00:06:34.712 [2024-12-12T09:08:48.349Z] total_run_count: 856000 00:06:34.712 [2024-12-12T09:08:48.349Z] tsc_hz: 2500000000 (cyc) 00:06:34.712 [2024-12-12T09:08:48.349Z] ====================================== 00:06:34.712 [2024-12-12T09:08:48.349Z] poller_cost: 2925 (cyc), 1170 (nsec) 00:06:34.712 00:06:34.712 real 0m1.184s 00:06:34.712 user 0m1.094s 00:06:34.712 sys 0m0.087s 00:06:34.712 10:08:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.712 10:08:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.712 ************************************ 00:06:34.712 END TEST thread_poller_perf 00:06:34.712 ************************************ 00:06:34.712 10:08:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.712 10:08:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:34.712 10:08:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.712 10:08:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.972 ************************************ 00:06:34.972 START TEST thread_poller_perf 00:06:34.972 ************************************ 00:06:34.972 10:08:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.972 [2024-12-12 10:08:48.370763] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:34.972 [2024-12-12 10:08:48.370850] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467432 ] 00:06:34.972 [2024-12-12 10:08:48.460052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.972 [2024-12-12 10:08:48.499369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.972 Running 1000 pollers for 1 seconds with 0 microseconds period. 
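The poller_cost figure printed above is plain arithmetic over the counters in the summary: busy cycles divided by the number of poller runs, then converted to nanoseconds with the reported TSC frequency. A quick shell check using the first run's values copied from the trace:

    $ echo $(( 2504236580 / 856000 ))              # 2925 cycles per poller run
    $ echo $(( 2925 * 1000000000 / 2500000000 ))   # 1170 nsec at tsc_hz 2500000000 (2.5 GHz)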
00:06:35.909 [2024-12-12T09:08:49.546Z] ====================================== 00:06:35.909 [2024-12-12T09:08:49.546Z] busy:2501381156 (cyc) 00:06:35.909 [2024-12-12T09:08:49.546Z] total_run_count: 12113000 00:06:35.909 [2024-12-12T09:08:49.546Z] tsc_hz: 2500000000 (cyc) 00:06:35.909 [2024-12-12T09:08:49.546Z] ====================================== 00:06:35.909 [2024-12-12T09:08:49.546Z] poller_cost: 206 (cyc), 82 (nsec) 00:06:35.909 00:06:35.909 real 0m1.182s 00:06:35.909 user 0m1.092s 00:06:35.909 sys 0m0.085s 00:06:35.909 10:08:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.909 10:08:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.909 ************************************ 00:06:35.909 END TEST thread_poller_perf 00:06:35.909 ************************************ 00:06:36.169 10:08:49 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:36.169 10:08:49 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:36.169 10:08:49 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.169 10:08:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.169 10:08:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.169 ************************************ 00:06:36.169 START TEST thread_spdk_lock 00:06:36.169 ************************************ 00:06:36.169 10:08:49 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:36.169 [2024-12-12 10:08:49.640628] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:36.169 [2024-12-12 10:08:49.640756] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467718 ] 00:06:36.169 [2024-12-12 10:08:49.729187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.169 [2024-12-12 10:08:49.769228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.169 [2024-12-12 10:08:49.769228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.737 [2024-12-12 10:08:50.258058] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 990:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:36.737 [2024-12-12 10:08:50.258095] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3214:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:36.737 [2024-12-12 10:08:50.258109] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3169:sspin_stacks_print: *ERROR*: spinlock 0x14e4380 00:06:36.737 [2024-12-12 10:08:50.258840] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 885:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:36.737 [2024-12-12 10:08:50.258953] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1051:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:36.737 [2024-12-12 
10:08:50.258972] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 885:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:36.737 Starting test contend 00:06:36.737 Worker Delay Wait us Hold us Total us 00:06:36.737 0 3 174173 185572 359746 00:06:36.737 1 5 85772 287241 373014 00:06:36.737 PASS test contend 00:06:36.737 Starting test hold_by_poller 00:06:36.737 PASS test hold_by_poller 00:06:36.737 Starting test hold_by_message 00:06:36.737 PASS test hold_by_message 00:06:36.737 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:36.737 100014 assertions passed 00:06:36.737 0 assertions failed 00:06:36.737 00:06:36.737 real 0m0.675s 00:06:36.737 user 0m1.064s 00:06:36.737 sys 0m0.097s 00:06:36.737 10:08:50 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.737 10:08:50 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:36.737 ************************************ 00:06:36.737 END TEST thread_spdk_lock 00:06:36.737 ************************************ 00:06:36.737 00:06:36.737 real 0m3.485s 00:06:36.737 user 0m3.445s 00:06:36.737 sys 0m0.553s 00:06:36.737 10:08:50 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.737 10:08:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.737 ************************************ 00:06:36.737 END TEST thread 00:06:36.737 ************************************ 00:06:36.997 10:08:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:36.997 10:08:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:36.997 10:08:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.997 10:08:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.997 10:08:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.997 ************************************ 00:06:36.997 START TEST app_cmdline 00:06:36.997 ************************************ 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:36.997 * Looking for test storage... 
00:06:36.997 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.997 10:08:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:36.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.997 --rc genhtml_branch_coverage=1 00:06:36.997 --rc genhtml_function_coverage=1 00:06:36.997 --rc genhtml_legend=1 00:06:36.997 --rc geninfo_all_blocks=1 00:06:36.997 --rc geninfo_unexecuted_blocks=1 00:06:36.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:36.997 ' 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:36.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.997 --rc genhtml_branch_coverage=1 00:06:36.997 --rc genhtml_function_coverage=1 00:06:36.997 --rc 
genhtml_legend=1 00:06:36.997 --rc geninfo_all_blocks=1 00:06:36.997 --rc geninfo_unexecuted_blocks=1 00:06:36.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:36.997 ' 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:36.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.997 --rc genhtml_branch_coverage=1 00:06:36.997 --rc genhtml_function_coverage=1 00:06:36.997 --rc genhtml_legend=1 00:06:36.997 --rc geninfo_all_blocks=1 00:06:36.997 --rc geninfo_unexecuted_blocks=1 00:06:36.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:36.997 ' 00:06:36.997 10:08:50 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:36.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.997 --rc genhtml_branch_coverage=1 00:06:36.997 --rc genhtml_function_coverage=1 00:06:36.997 --rc genhtml_legend=1 00:06:36.997 --rc geninfo_all_blocks=1 00:06:36.997 --rc geninfo_unexecuted_blocks=1 00:06:36.997 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:36.997 ' 00:06:36.997 10:08:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:36.997 10:08:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=467854 00:06:36.997 10:08:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 467854 00:06:36.997 10:08:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:36.998 10:08:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 467854 ']' 00:06:36.998 10:08:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.998 10:08:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.998 10:08:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.998 10:08:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.998 10:08:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.257 [2024-12-12 10:08:50.644463] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:06:37.257 [2024-12-12 10:08:50.644527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467854 ] 00:06:37.257 [2024-12-12 10:08:50.729810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.257 [2024-12-12 10:08:50.772562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.516 10:08:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.516 10:08:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:37.516 10:08:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:37.775 { 00:06:37.775 "version": "SPDK v25.01-pre git sha1 44c641464", 00:06:37.775 "fields": { 00:06:37.775 "major": 25, 00:06:37.775 "minor": 1, 00:06:37.775 "patch": 0, 00:06:37.775 "suffix": "-pre", 00:06:37.775 "commit": "44c641464" 00:06:37.775 } 00:06:37.775 } 00:06:37.775 10:08:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.775 10:08:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.776 10:08:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.776 10:08:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.776 10:08:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.776 10:08:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.776 10:08:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.776 10:08:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:37.776 10:08:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:37.776 10:08:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:37.776 10:08:51 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:37.776 10:08:51 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.035 request: 00:06:38.035 { 00:06:38.035 "method": "env_dpdk_get_mem_stats", 00:06:38.035 "req_id": 1 00:06:38.035 } 00:06:38.035 Got JSON-RPC error response 00:06:38.035 response: 00:06:38.035 { 00:06:38.035 "code": -32601, 00:06:38.035 "message": "Method not found" 00:06:38.035 } 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.035 10:08:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 467854 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 467854 ']' 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 467854 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 467854 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 467854' 00:06:38.035 killing process with pid 467854 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 467854 00:06:38.035 10:08:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 467854 00:06:38.295 00:06:38.295 real 0m1.376s 00:06:38.295 user 0m1.562s 00:06:38.295 sys 0m0.521s 00:06:38.295 10:08:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.295 10:08:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.295 ************************************ 00:06:38.295 END TEST app_cmdline 00:06:38.295 ************************************ 00:06:38.295 10:08:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:38.295 10:08:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.295 10:08:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.295 10:08:51 -- common/autotest_common.sh@10 -- # set +x 00:06:38.295 ************************************ 00:06:38.295 START TEST version 00:06:38.295 ************************************ 00:06:38.295 10:08:51 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:38.555 * Looking for test storage... 
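The -32601 "Method not found" response above is expected: the cmdline target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so every other RPC is rejected. A hedged sketch of the same behaviour against that target's default socket, using the rpc.py path exercised elsewhere in this run:

    $ scripts/rpc.py rpc_get_methods          # returns only the two permitted methods in this run
    $ scripts/rpc.py spdk_get_version         # allowed; prints the version JSON shown above
    $ scripts/rpc.py env_dpdk_get_mem_stats   # rejected with JSON-RPC error -32601 (Method not found)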
00:06:38.555 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:38.555 10:08:51 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.555 10:08:51 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.555 10:08:51 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.555 10:08:52 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.555 10:08:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.555 10:08:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.555 10:08:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.555 10:08:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.555 10:08:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.555 10:08:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.555 10:08:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.555 10:08:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.555 10:08:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.555 10:08:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.555 10:08:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.555 10:08:52 version -- scripts/common.sh@344 -- # case "$op" in 00:06:38.555 10:08:52 version -- scripts/common.sh@345 -- # : 1 00:06:38.555 10:08:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.555 10:08:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.555 10:08:52 version -- scripts/common.sh@365 -- # decimal 1 00:06:38.555 10:08:52 version -- scripts/common.sh@353 -- # local d=1 00:06:38.555 10:08:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.555 10:08:52 version -- scripts/common.sh@355 -- # echo 1 00:06:38.555 10:08:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.555 10:08:52 version -- scripts/common.sh@366 -- # decimal 2 00:06:38.555 10:08:52 version -- scripts/common.sh@353 -- # local d=2 00:06:38.555 10:08:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.555 10:08:52 version -- scripts/common.sh@355 -- # echo 2 00:06:38.555 10:08:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.555 10:08:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.555 10:08:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.555 10:08:52 version -- scripts/common.sh@368 -- # return 0 00:06:38.555 10:08:52 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.555 10:08:52 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.555 --rc genhtml_branch_coverage=1 00:06:38.555 --rc genhtml_function_coverage=1 00:06:38.555 --rc genhtml_legend=1 00:06:38.555 --rc geninfo_all_blocks=1 00:06:38.555 --rc geninfo_unexecuted_blocks=1 00:06:38.555 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.555 ' 00:06:38.555 10:08:52 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.555 --rc genhtml_branch_coverage=1 00:06:38.555 --rc genhtml_function_coverage=1 00:06:38.555 --rc genhtml_legend=1 00:06:38.555 --rc geninfo_all_blocks=1 00:06:38.555 --rc geninfo_unexecuted_blocks=1 00:06:38.555 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.555 ' 00:06:38.555 10:08:52 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.555 --rc genhtml_branch_coverage=1 00:06:38.555 --rc genhtml_function_coverage=1 00:06:38.555 --rc genhtml_legend=1 00:06:38.555 --rc geninfo_all_blocks=1 00:06:38.555 --rc geninfo_unexecuted_blocks=1 00:06:38.555 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.555 ' 00:06:38.555 10:08:52 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.555 --rc genhtml_branch_coverage=1 00:06:38.555 --rc genhtml_function_coverage=1 00:06:38.555 --rc genhtml_legend=1 00:06:38.555 --rc geninfo_all_blocks=1 00:06:38.555 --rc geninfo_unexecuted_blocks=1 00:06:38.555 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:38.555 ' 00:06:38.555 10:08:52 version -- app/version.sh@17 -- # get_header_version major 00:06:38.555 10:08:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:38.555 10:08:52 version -- app/version.sh@14 -- # cut -f2 00:06:38.555 10:08:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.555 10:08:52 version -- app/version.sh@17 -- # major=25 00:06:38.555 10:08:52 version -- app/version.sh@18 -- # get_header_version minor 00:06:38.555 10:08:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:38.555 10:08:52 version -- app/version.sh@14 -- # cut -f2 00:06:38.555 10:08:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.555 10:08:52 version -- app/version.sh@18 -- # minor=1 00:06:38.555 10:08:52 version -- app/version.sh@19 -- # get_header_version patch 00:06:38.555 10:08:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:38.556 10:08:52 version -- app/version.sh@14 -- # cut -f2 00:06:38.556 10:08:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.556 10:08:52 version -- app/version.sh@19 -- # patch=0 00:06:38.556 10:08:52 version -- app/version.sh@20 -- # get_header_version suffix 00:06:38.556 10:08:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:38.556 10:08:52 version -- app/version.sh@14 -- # cut -f2 00:06:38.556 10:08:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.556 10:08:52 version -- app/version.sh@20 -- # suffix=-pre 00:06:38.556 10:08:52 version -- app/version.sh@22 -- # version=25.1 00:06:38.556 10:08:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:38.556 10:08:52 version -- app/version.sh@28 -- # version=25.1rc0 00:06:38.556 10:08:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:38.556 10:08:52 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:06:38.556 10:08:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:38.556 10:08:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:38.556 00:06:38.556 real 0m0.269s 00:06:38.556 user 0m0.169s 00:06:38.556 sys 0m0.153s 00:06:38.556 10:08:52 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.556 10:08:52 version -- common/autotest_common.sh@10 -- # set +x 00:06:38.556 ************************************ 00:06:38.556 END TEST version 00:06:38.556 ************************************ 00:06:38.556 10:08:52 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:38.556 10:08:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:38.556 10:08:52 -- spdk/autotest.sh@194 -- # uname -s 00:06:38.815 10:08:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:38.815 10:08:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:38.815 10:08:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:38.815 10:08:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:38.815 10:08:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.815 10:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.815 10:08:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:06:38.815 10:08:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:06:38.815 10:08:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:06:38.815 10:08:52 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:06:38.815 10:08:52 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:38.815 10:08:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.815 10:08:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.815 10:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.815 ************************************ 00:06:38.815 START TEST llvm_fuzz 00:06:38.815 ************************************ 00:06:38.815 10:08:52 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:38.815 * Looking for test storage... 
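For orientation, the version test that just finished is a cross-check of the C header against the in-tree Python package: SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX are pulled out of include/spdk/version.h with a grep | cut | tr pipeline, assembled into a version string (25.1 plus rc0 here, since patch is 0 and the suffix is -pre), and compared with what `import spdk` reports. A rough sketch of that extraction under the same assumptions; the real app/version.sh also covers the patch != 0 case, which this sketch omits:

# Sketch: cross-check include/spdk/version.h against the spdk Python package.
spdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
hdr="$spdk_dir/include/spdk/version.h"

get_header_version() {
    # '#define SPDK_VERSION_MAJOR 25' -> '25' (same grep/cut/tr pipeline as the trace)
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

version="$(get_header_version MAJOR).$(get_header_version MINOR)"
[[ $(get_header_version SUFFIX) == -pre ]] && version+=rc0   # -pre maps to rc0 in this run

py_version=$(PYTHONPATH="$spdk_dir/python" python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]] && echo "header and python package agree: $version"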
00:06:38.815 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:38.815 10:08:52 llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.815 10:08:52 llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.815 10:08:52 llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.075 10:08:52 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:39.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.075 --rc genhtml_branch_coverage=1 00:06:39.075 --rc genhtml_function_coverage=1 00:06:39.075 --rc genhtml_legend=1 00:06:39.075 --rc geninfo_all_blocks=1 00:06:39.075 --rc geninfo_unexecuted_blocks=1 00:06:39.075 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.075 ' 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:39.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.075 --rc genhtml_branch_coverage=1 00:06:39.075 --rc genhtml_function_coverage=1 00:06:39.075 --rc genhtml_legend=1 00:06:39.075 --rc geninfo_all_blocks=1 00:06:39.075 --rc 
geninfo_unexecuted_blocks=1 00:06:39.075 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.075 ' 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:39.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.075 --rc genhtml_branch_coverage=1 00:06:39.075 --rc genhtml_function_coverage=1 00:06:39.075 --rc genhtml_legend=1 00:06:39.075 --rc geninfo_all_blocks=1 00:06:39.075 --rc geninfo_unexecuted_blocks=1 00:06:39.075 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.075 ' 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:39.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.075 --rc genhtml_branch_coverage=1 00:06:39.075 --rc genhtml_function_coverage=1 00:06:39.075 --rc genhtml_legend=1 00:06:39.075 --rc geninfo_all_blocks=1 00:06:39.075 --rc geninfo_unexecuted_blocks=1 00:06:39.075 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.075 ' 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:39.075 10:08:52 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:06:39.075 10:08:52 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:06:39.076 10:08:52 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:39.076 10:08:52 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.076 10:08:52 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.076 10:08:52 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:39.076 ************************************ 00:06:39.076 START TEST nvmf_llvm_fuzz 00:06:39.076 ************************************ 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:39.076 * Looking for test storage... 
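The llvm.sh driver whose trace begins above discovers its fuzzers by globbing test/fuzz/llvm/, keeping only the basenames (common.sh llvm-gcov.sh nvmf vfio in this run), and then dispatching just the directories that are real runners while skipping the helper scripts; nvmf is the first runner started below. A condensed sketch of that discovery/dispatch loop, assuming the same tree layout (the real get_fuzzer_targets also honors an allow-list variable, which is empty in this run):

# Sketch: how llvm.sh enumerates and dispatches fuzz targets in this trace.
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
llvm_out="$rootdir/../output/llvm"

fuzzers=("$rootdir"/test/fuzz/llvm/*)      # glob everything under test/fuzz/llvm
fuzzers=("${fuzzers[@]##*/}")              # strip dirnames: common.sh llvm-gcov.sh nvmf vfio

mkdir -p "$rootdir/../corpus" "$llvm_out"  # corpus and coverage output dirs, as in the trace

for fuzzer in "${fuzzers[@]}"; do
    case "$fuzzer" in
        nvmf | vfio)
            # Each runner directory provides its own run.sh entry point.
            echo "would run: $rootdir/test/fuzz/llvm/$fuzzer/run.sh"
            ;;
        *) ;;                              # common.sh and llvm-gcov.sh are helpers, not targets
    esac
done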
00:06:39.076 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.076 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:39.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.338 --rc genhtml_branch_coverage=1 00:06:39.338 --rc genhtml_function_coverage=1 00:06:39.338 --rc genhtml_legend=1 00:06:39.338 --rc geninfo_all_blocks=1 00:06:39.338 --rc geninfo_unexecuted_blocks=1 00:06:39.338 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.338 ' 00:06:39.338 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:39.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.338 --rc genhtml_branch_coverage=1 00:06:39.338 --rc genhtml_function_coverage=1 00:06:39.338 --rc genhtml_legend=1 00:06:39.339 --rc geninfo_all_blocks=1 00:06:39.339 --rc geninfo_unexecuted_blocks=1 00:06:39.339 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.339 ' 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:39.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.339 --rc genhtml_branch_coverage=1 00:06:39.339 --rc genhtml_function_coverage=1 00:06:39.339 --rc genhtml_legend=1 00:06:39.339 --rc geninfo_all_blocks=1 00:06:39.339 --rc geninfo_unexecuted_blocks=1 00:06:39.339 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.339 ' 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:39.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.339 --rc genhtml_branch_coverage=1 00:06:39.339 --rc genhtml_function_coverage=1 00:06:39.339 --rc genhtml_legend=1 00:06:39.339 --rc geninfo_all_blocks=1 00:06:39.339 --rc geninfo_unexecuted_blocks=1 00:06:39.339 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.339 ' 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:39.339 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:39.340 #define SPDK_CONFIG_H 00:06:39.340 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:39.340 #define SPDK_CONFIG_APPS 1 00:06:39.340 #define SPDK_CONFIG_ARCH native 00:06:39.340 #undef SPDK_CONFIG_ASAN 00:06:39.340 #undef SPDK_CONFIG_AVAHI 00:06:39.340 #undef SPDK_CONFIG_CET 00:06:39.340 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:39.340 #define SPDK_CONFIG_COVERAGE 1 00:06:39.340 #define SPDK_CONFIG_CROSS_PREFIX 00:06:39.340 #undef SPDK_CONFIG_CRYPTO 00:06:39.340 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:39.340 #undef SPDK_CONFIG_CUSTOMOCF 00:06:39.340 #undef SPDK_CONFIG_DAOS 00:06:39.340 #define SPDK_CONFIG_DAOS_DIR 00:06:39.340 #define SPDK_CONFIG_DEBUG 1 00:06:39.340 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:39.340 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:39.340 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:39.340 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:39.340 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:39.340 #undef SPDK_CONFIG_DPDK_UADK 00:06:39.340 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:39.340 #define SPDK_CONFIG_EXAMPLES 1 00:06:39.340 #undef SPDK_CONFIG_FC 00:06:39.340 #define SPDK_CONFIG_FC_PATH 00:06:39.340 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:39.340 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:39.340 #define SPDK_CONFIG_FSDEV 1 00:06:39.340 #undef SPDK_CONFIG_FUSE 00:06:39.340 #define SPDK_CONFIG_FUZZER 1 00:06:39.340 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:39.340 #undef 
SPDK_CONFIG_GOLANG 00:06:39.340 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:39.340 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:39.340 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:39.340 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:39.340 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:39.340 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:39.340 #undef SPDK_CONFIG_HAVE_LZ4 00:06:39.340 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:39.340 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:39.340 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:39.340 #define SPDK_CONFIG_IDXD 1 00:06:39.340 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:39.340 #undef SPDK_CONFIG_IPSEC_MB 00:06:39.340 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:39.340 #define SPDK_CONFIG_ISAL 1 00:06:39.340 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:39.340 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:39.340 #define SPDK_CONFIG_LIBDIR 00:06:39.340 #undef SPDK_CONFIG_LTO 00:06:39.340 #define SPDK_CONFIG_MAX_LCORES 128 00:06:39.340 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:06:39.340 #define SPDK_CONFIG_NVME_CUSE 1 00:06:39.340 #undef SPDK_CONFIG_OCF 00:06:39.340 #define SPDK_CONFIG_OCF_PATH 00:06:39.340 #define SPDK_CONFIG_OPENSSL_PATH 00:06:39.340 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:39.340 #define SPDK_CONFIG_PGO_DIR 00:06:39.340 #undef SPDK_CONFIG_PGO_USE 00:06:39.340 #define SPDK_CONFIG_PREFIX /usr/local 00:06:39.340 #undef SPDK_CONFIG_RAID5F 00:06:39.340 #undef SPDK_CONFIG_RBD 00:06:39.340 #define SPDK_CONFIG_RDMA 1 00:06:39.340 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:39.340 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:39.340 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:39.340 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:39.340 #undef SPDK_CONFIG_SHARED 00:06:39.340 #undef SPDK_CONFIG_SMA 00:06:39.340 #define SPDK_CONFIG_TESTS 1 00:06:39.340 #undef SPDK_CONFIG_TSAN 00:06:39.340 #define SPDK_CONFIG_UBLK 1 00:06:39.340 #define SPDK_CONFIG_UBSAN 1 00:06:39.340 #undef SPDK_CONFIG_UNIT_TESTS 00:06:39.340 #undef SPDK_CONFIG_URING 00:06:39.340 #define SPDK_CONFIG_URING_PATH 00:06:39.340 #undef SPDK_CONFIG_URING_ZNS 00:06:39.340 #undef SPDK_CONFIG_USDT 00:06:39.340 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:39.340 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:39.340 #define SPDK_CONFIG_VFIO_USER 1 00:06:39.340 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:39.340 #define SPDK_CONFIG_VHOST 1 00:06:39.340 #define SPDK_CONFIG_VIRTIO 1 00:06:39.340 #undef SPDK_CONFIG_VTUNE 00:06:39.340 #define SPDK_CONFIG_VTUNE_DIR 00:06:39.340 #define SPDK_CONFIG_WERROR 1 00:06:39.340 #define SPDK_CONFIG_WPDK_DIR 00:06:39.340 #undef SPDK_CONFIG_XNVME 00:06:39.340 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:39.340 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:39.341 10:08:52 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
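The run of ': 0' / ': 1' lines interleaved with 'export SPDK_TEST_*' around this point is bash xtrace output of a default-assignment idiom: each feature flag keeps whatever value autorun-spdk.conf already set for this job (SPDK_TEST_FUZZER=1, SPDK_RUN_UBSAN=1, and so on), falls back to a default otherwise, and is then exported for the child test scripts. Roughly, and with illustrative flag names only:

# Sketch of the default-then-export pattern behind the ': 0' xtrace lines.
# Under 'set -x', ': "${SPDK_TEST_NVME:=0}"' is printed as ': 0' after expansion.
: "${SPDK_TEST_NVME:=0}"            # keep a value already set by autorun-spdk.conf
export SPDK_TEST_NVME

: "${SPDK_TEST_FUZZER_SHORT:=0}"    # set to 1 by this job's conf, so xtrace shows ': 1'
export SPDK_TEST_FUZZER_SHORT

echo "SPDK_TEST_NVME=$SPDK_TEST_NVME SPDK_TEST_FUZZER_SHORT=$SPDK_TEST_FUZZER_SHORT"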
00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.341 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:39.342 10:08:52 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 468491 ]] 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 468491 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.X50dX5 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.X50dX5/tests/nvmf /tmp/spdk.X50dX5 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=785162240 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4499267584 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=54106705920 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730619392 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7623913472 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:39.342 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30861881344 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865309696 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340125696 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346126336 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=6000640 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30865125376 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865309696 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=184320 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173048832 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173061120 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:06:39.343 * Looking for test storage... 
00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=54106705920 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=9838505984 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.343 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.343 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:39.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.603 --rc genhtml_branch_coverage=1 00:06:39.603 --rc genhtml_function_coverage=1 00:06:39.603 --rc genhtml_legend=1 00:06:39.603 --rc geninfo_all_blocks=1 00:06:39.603 --rc geninfo_unexecuted_blocks=1 00:06:39.603 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.603 ' 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:39.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.603 --rc genhtml_branch_coverage=1 00:06:39.603 --rc genhtml_function_coverage=1 00:06:39.603 --rc genhtml_legend=1 00:06:39.603 --rc geninfo_all_blocks=1 00:06:39.603 --rc geninfo_unexecuted_blocks=1 00:06:39.603 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.603 ' 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:39.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.603 --rc genhtml_branch_coverage=1 00:06:39.603 --rc genhtml_function_coverage=1 00:06:39.603 --rc genhtml_legend=1 00:06:39.603 --rc geninfo_all_blocks=1 00:06:39.603 --rc geninfo_unexecuted_blocks=1 00:06:39.603 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.603 ' 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:39.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.603 --rc genhtml_branch_coverage=1 00:06:39.603 --rc genhtml_function_coverage=1 00:06:39.603 --rc genhtml_legend=1 00:06:39.603 --rc geninfo_all_blocks=1 00:06:39.603 --rc geninfo_unexecuted_blocks=1 00:06:39.603 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:39.603 ' 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:39.603 10:08:52 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:39.603 10:08:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:39.603 10:08:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:39.603 [2024-12-12 10:08:53.046823] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:39.603 [2024-12-12 10:08:53.046900] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468550 ] 00:06:39.863 [2024-12-12 10:08:53.329693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.863 [2024-12-12 10:08:53.390475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.863 [2024-12-12 10:08:53.449430] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.863 [2024-12-12 10:08:53.465778] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:39.863 INFO: Running with entropic power schedule (0xFF, 100). 00:06:39.863 INFO: Seed: 1386286247 00:06:40.122 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:06:40.122 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:06:40.122 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:40.122 INFO: A corpus is not provided, starting from an empty corpus 00:06:40.122 #2 INITED exec/s: 0 rss: 65Mb 00:06:40.122 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:40.122 This may also happen if the target rejected all inputs we tried so far 00:06:40.122 [2024-12-12 10:08:53.521130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.122 [2024-12-12 10:08:53.521159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.381 NEW_FUNC[1/717]: 0x43bbe8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:40.381 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:40.381 #10 NEW cov: 12136 ft: 12133 corp: 2/95b lim: 320 exec/s: 0 rss: 73Mb L: 94/94 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:06:40.381 [2024-12-12 10:08:53.852227] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.381 [2024-12-12 10:08:53.852283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.381 #13 NEW cov: 12275 ft: 12887 corp: 3/191b lim: 320 exec/s: 0 rss: 73Mb L: 96/96 MS: 3 InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:06:40.381 [2024-12-12 10:08:53.902085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.381 [2024-12-12 10:08:53.902116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.381 #19 NEW cov: 12281 ft: 13057 corp: 4/287b lim: 320 exec/s: 0 rss: 73Mb L: 96/96 MS: 1 ChangeByte- 00:06:40.381 [2024-12-12 10:08:53.962216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND 
(ff) qid:0 cid:4 nsid:b00 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.381 [2024-12-12 10:08:53.962242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.381 #20 NEW cov: 12366 ft: 13371 corp: 5/385b lim: 320 exec/s: 0 rss: 73Mb L: 98/98 MS: 1 CMP- DE: "\013\000\000\000"- 00:06:40.640 [2024-12-12 10:08:54.022572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0xdededede00000000 00:06:40.640 [2024-12-12 10:08:54.022598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.640 [2024-12-12 10:08:54.022653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.640 [2024-12-12 10:08:54.022668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.640 #21 NEW cov: 12378 ft: 13659 corp: 6/515b lim: 320 exec/s: 0 rss: 73Mb L: 130/130 MS: 1 InsertRepeatedBytes- 00:06:40.640 [2024-12-12 10:08:54.082543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.640 [2024-12-12 10:08:54.082568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.640 #22 NEW cov: 12378 ft: 13775 corp: 7/609b lim: 320 exec/s: 0 rss: 73Mb L: 94/130 MS: 1 ChangeBinInt- 00:06:40.640 [2024-12-12 10:08:54.122667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.640 [2024-12-12 10:08:54.122693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.640 #28 NEW cov: 12378 ft: 13839 corp: 8/691b lim: 320 exec/s: 0 rss: 73Mb L: 82/130 MS: 1 CrossOver- 00:06:40.640 [2024-12-12 10:08:54.162935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0xdededede00000000 00:06:40.640 [2024-12-12 10:08:54.162961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.640 [2024-12-12 10:08:54.163017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000008 00:06:40.640 [2024-12-12 10:08:54.163032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.640 #29 NEW cov: 12378 ft: 13872 corp: 9/821b lim: 320 exec/s: 0 rss: 73Mb L: 130/130 MS: 1 ChangeBit- 00:06:40.641 [2024-12-12 10:08:54.223103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0xdededede00000000 00:06:40.641 [2024-12-12 10:08:54.223128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.641 [2024-12-12 10:08:54.223184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000008 00:06:40.641 [2024-12-12 10:08:54.223197] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.641 #30 NEW cov: 12378 ft: 13963 corp: 10/951b lim: 320 exec/s: 0 rss: 73Mb L: 130/130 MS: 1 ChangeBinInt- 00:06:40.900 [2024-12-12 10:08:54.283157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.900 [2024-12-12 10:08:54.283182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.900 #31 NEW cov: 12378 ft: 13992 corp: 11/1047b lim: 320 exec/s: 0 rss: 73Mb L: 96/130 MS: 1 ChangeBit- 00:06:40.900 [2024-12-12 10:08:54.323211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.900 [2024-12-12 10:08:54.323236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.900 #32 NEW cov: 12378 ft: 14012 corp: 12/1142b lim: 320 exec/s: 0 rss: 73Mb L: 95/130 MS: 1 InsertByte- 00:06:40.900 [2024-12-12 10:08:54.363362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.900 [2024-12-12 10:08:54.363386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.900 #33 NEW cov: 12378 ft: 14072 corp: 13/1238b lim: 320 exec/s: 0 rss: 73Mb L: 96/130 MS: 1 ShuffleBytes- 00:06:40.900 [2024-12-12 10:08:54.403580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:b00 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.900 [2024-12-12 10:08:54.403605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.900 [2024-12-12 10:08:54.403660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.900 [2024-12-12 10:08:54.403673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.900 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:40.900 #34 NEW cov: 12401 ft: 14202 corp: 14/1386b lim: 320 exec/s: 0 rss: 74Mb L: 148/148 MS: 1 CopyPart- 00:06:40.900 [2024-12-12 10:08:54.463839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.900 [2024-12-12 10:08:54.463864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.900 [2024-12-12 10:08:54.463919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.900 [2024-12-12 10:08:54.463933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.900 [2024-12-12 10:08:54.463985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:40.900 [2024-12-12 10:08:54.463999] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.900 #35 NEW cov: 12401 ft: 14387 corp: 15/1594b lim: 320 exec/s: 0 rss: 74Mb L: 208/208 MS: 1 InsertRepeatedBytes- 00:06:40.900 [2024-12-12 10:08:54.523810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.900 [2024-12-12 10:08:54.523835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.159 #41 NEW cov: 12401 ft: 14399 corp: 16/1692b lim: 320 exec/s: 41 rss: 74Mb L: 98/208 MS: 1 CMP- DE: "\001\000"- 00:06:41.159 [2024-12-12 10:08:54.584047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:ffffff00 cdw10:ffffffff cdw11:ffffffff 00:06:41.159 [2024-12-12 10:08:54.584072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.159 [2024-12-12 10:08:54.584137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:41.159 [2024-12-12 10:08:54.584150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.159 #42 NEW cov: 12408 ft: 14414 corp: 17/1866b lim: 320 exec/s: 42 rss: 74Mb L: 174/208 MS: 1 InsertRepeatedBytes- 00:06:41.159 [2024-12-12 10:08:54.644445] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0xdededede00000000 00:06:41.159 [2024-12-12 10:08:54.644469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.159 [2024-12-12 10:08:54.644524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000008 00:06:41.159 [2024-12-12 10:08:54.644538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.159 [2024-12-12 10:08:54.644593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.159 [2024-12-12 10:08:54.644606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.159 #43 NEW cov: 12408 ft: 14486 corp: 18/2066b lim: 320 exec/s: 43 rss: 74Mb L: 200/208 MS: 1 InsertRepeatedBytes- 00:06:41.159 [2024-12-12 10:08:54.704297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.159 [2024-12-12 10:08:54.704322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.159 #44 NEW cov: 12408 ft: 14495 corp: 19/2149b lim: 320 exec/s: 44 rss: 74Mb L: 83/208 MS: 1 InsertByte- 00:06:41.159 [2024-12-12 10:08:54.744419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.159 [2024-12-12 10:08:54.744444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.159 #45 NEW cov: 12408 ft: 14505 corp: 
20/2243b lim: 320 exec/s: 45 rss: 74Mb L: 94/208 MS: 1 ChangeByte- 00:06:41.418 [2024-12-12 10:08:54.804618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.418 [2024-12-12 10:08:54.804643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.418 #46 NEW cov: 12408 ft: 14535 corp: 21/2309b lim: 320 exec/s: 46 rss: 74Mb L: 66/208 MS: 1 EraseBytes- 00:06:41.418 [2024-12-12 10:08:54.844849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:b00 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.418 [2024-12-12 10:08:54.844881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.418 [2024-12-12 10:08:54.844936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.418 [2024-12-12 10:08:54.844949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.418 #47 NEW cov: 12408 ft: 14541 corp: 22/2448b lim: 320 exec/s: 47 rss: 74Mb L: 139/208 MS: 1 CopyPart- 00:06:41.418 [2024-12-12 10:08:54.884848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.418 [2024-12-12 10:08:54.884872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.418 #48 NEW cov: 12408 ft: 14557 corp: 23/2531b lim: 320 exec/s: 48 rss: 74Mb L: 83/208 MS: 1 CMP- DE: ")?"- 00:06:41.418 [2024-12-12 10:08:54.945000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:b00 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.418 [2024-12-12 10:08:54.945025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.418 #49 NEW cov: 12408 ft: 14644 corp: 24/2629b lim: 320 exec/s: 49 rss: 74Mb L: 98/208 MS: 1 ShuffleBytes- 00:06:41.418 [2024-12-12 10:08:54.985360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0xdededede00000000 00:06:41.418 [2024-12-12 10:08:54.985385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.418 [2024-12-12 10:08:54.985437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000008 00:06:41.418 [2024-12-12 10:08:54.985450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.418 [2024-12-12 10:08:54.985520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000022 cdw11:00000000 00:06:41.418 [2024-12-12 10:08:54.985534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.418 #50 NEW cov: 12408 ft: 14654 corp: 25/2830b lim: 320 exec/s: 50 rss: 74Mb L: 201/208 MS: 1 InsertByte- 00:06:41.418 [2024-12-12 10:08:55.045353] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.418 [2024-12-12 10:08:55.045378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.677 #51 NEW cov: 12408 ft: 14668 corp: 26/2897b lim: 320 exec/s: 51 rss: 75Mb L: 67/208 MS: 1 InsertByte- 00:06:41.677 [2024-12-12 10:08:55.105550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:b00 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.677 [2024-12-12 10:08:55.105576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.677 [2024-12-12 10:08:55.105632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.677 [2024-12-12 10:08:55.105645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.677 #52 NEW cov: 12408 ft: 14696 corp: 27/3040b lim: 320 exec/s: 52 rss: 75Mb L: 143/208 MS: 1 PersAutoDict- DE: "\013\000\000\000"- 00:06:41.677 [2024-12-12 10:08:55.165650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.677 [2024-12-12 10:08:55.165677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.677 #53 NEW cov: 12408 ft: 14722 corp: 28/3142b lim: 320 exec/s: 53 rss: 75Mb L: 102/208 MS: 1 PersAutoDict- DE: "\013\000\000\000"- 00:06:41.677 [2024-12-12 10:08:55.225799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.677 [2024-12-12 10:08:55.225825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.677 #54 NEW cov: 12408 ft: 14744 corp: 29/3226b lim: 320 exec/s: 54 rss: 75Mb L: 84/208 MS: 1 PersAutoDict- DE: ")?"- 00:06:41.677 [2024-12-12 10:08:55.265855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.677 [2024-12-12 10:08:55.265880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.677 #55 NEW cov: 12408 ft: 14762 corp: 30/3320b lim: 320 exec/s: 55 rss: 75Mb L: 94/208 MS: 1 CMP- DE: "\377\001\000\000"- 00:06:41.935 [2024-12-12 10:08:55.326286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.935 [2024-12-12 10:08:55.326312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.935 [2024-12-12 10:08:55.326368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.935 [2024-12-12 10:08:55.326386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.935 [2024-12-12 10:08:55.326442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.936 [2024-12-12 
10:08:55.326455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.936 #56 NEW cov: 12408 ft: 14775 corp: 31/3528b lim: 320 exec/s: 56 rss: 75Mb L: 208/208 MS: 1 CMP- DE: "\011\000"- 00:06:41.936 [2024-12-12 10:08:55.386614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:b00 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.936 [2024-12-12 10:08:55.386639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.936 [2024-12-12 10:08:55.386693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.936 [2024-12-12 10:08:55.386706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.936 [2024-12-12 10:08:55.386764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.936 [2024-12-12 10:08:55.386778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.936 [2024-12-12 10:08:55.386831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.936 [2024-12-12 10:08:55.386844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.936 #57 NEW cov: 12408 ft: 14933 corp: 32/3787b lim: 320 exec/s: 57 rss: 75Mb L: 259/259 MS: 1 CopyPart- 00:06:41.936 [2024-12-12 10:08:55.446671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:dededede SGL TRANSPORT DATA BLOCK TRANSPORT 0xdededede00000000 00:06:41.936 [2024-12-12 10:08:55.446697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.936 [2024-12-12 10:08:55.446754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000008 00:06:41.936 [2024-12-12 10:08:55.446769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.936 [2024-12-12 10:08:55.446823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:41.936 [2024-12-12 10:08:55.446837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.936 #58 NEW cov: 12408 ft: 14976 corp: 33/3984b lim: 320 exec/s: 58 rss: 75Mb L: 197/259 MS: 1 EraseBytes- 00:06:41.936 [2024-12-12 10:08:55.486334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000b2 SGL TRANSPORT DATA BLOCK TRANSPORT 0xc000000000000000 00:06:41.936 [2024-12-12 10:08:55.486359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.936 #62 NEW cov: 12408 ft: 15016 corp: 34/4050b lim: 320 exec/s: 31 rss: 75Mb L: 66/259 MS: 4 EraseBytes-InsertByte-InsertByte-InsertRepeatedBytes- 00:06:41.936 #62 DONE cov: 12408 ft: 15016 corp: 34/4050b lim: 320 exec/s: 31 rss: 75Mb 00:06:41.936 ###### Recommended dictionary. 
###### 00:06:41.936 "\013\000\000\000" # Uses: 2 00:06:41.936 "\001\000" # Uses: 0 00:06:41.936 ")?" # Uses: 1 00:06:41.936 "\377\001\000\000" # Uses: 0 00:06:41.936 "\011\000" # Uses: 0 00:06:41.936 ###### End of recommended dictionary. ###### 00:06:41.936 Done 62 runs in 2 second(s) 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:42.195 10:08:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:42.195 [2024-12-12 10:08:55.661530] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
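Between the "Done 62 runs" summary and the EAL banner above, nvmf/run.sh tears down run 0 and prepares run 1: start_llvm_fuzz derives the NVMe/TCP listener port from the fuzzer index (printf %02d 1 gives 01, hence port 4401), rewrites the template JSON config's trsvcid with sed, registers LeakSanitizer suppressions, and launches llvm_nvme_fuzz against its own corpus directory. Below is a condensed sketch of that per-target setup; SPDK_ROOT and OUTPUT_DIR are stand-ins for the long workspace paths in the log, and the redirections into the config and suppression files are inferred rather than shown verbatim in the trace.

    #!/usr/bin/env bash
    # Condensed per-fuzzer setup as traced in nvmf/run.sh; SPDK_ROOT/OUTPUT_DIR
    # are placeholders and the real script carries additional state.
    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local corpus_dir=$SPDK_ROOT/../corpus/llvm_nvmf_$fuzzer_type
        local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
        local suppress_file=/var/tmp/suppress_nvmf_fuzz

        # Fuzzer 0 listens on 4400, fuzzer 1 on 4401, and so on.
        local port="44$(printf %02d "$fuzzer_type")"
        local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

        mkdir -p "$corpus_dir"

        # Point the template JSON config at this fuzzer's port (output redirection assumed).
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
            "$SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

        # Known-leak suppressions for LeakSanitizer, as echoed in the trace.
        echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
        echo leak:nvmf_ctrlr_create >> "$suppress_file"

        # Run the libFuzzer target with the options visible in the log:
        # core mask, 512 MiB hugepage memory, transport ID, per-run config,
        # time budget, corpus directory, and fuzzer index.
        LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
        "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
            -m "$core" -s 512 -P "$OUTPUT_DIR/llvm/" -F "$trid" \
            -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
    }

With fuzz_num=25 from the earlier grep -c over llvm_nvme_fuzz.c, the surrounding loop would invoke this once per fuzzer index (0 through 24, ports 4400 through 4424), which is why run 1 below starts a listener on 127.0.0.1 port 4401.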
00:06:42.195 [2024-12-12 10:08:55.661607] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469084 ] 00:06:42.454 [2024-12-12 10:08:55.936442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.454 [2024-12-12 10:08:55.996238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.454 [2024-12-12 10:08:56.055048] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.454 [2024-12-12 10:08:56.071379] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:42.454 INFO: Running with entropic power schedule (0xFF, 100). 00:06:42.454 INFO: Seed: 3991278028 00:06:42.714 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:06:42.714 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:06:42.714 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:42.714 INFO: A corpus is not provided, starting from an empty corpus 00:06:42.714 #2 INITED exec/s: 0 rss: 65Mb 00:06:42.714 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:42.714 This may also happen if the target rejected all inputs we tried so far 00:06:42.714 [2024-12-12 10:08:56.126622] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.714 [2024-12-12 10:08:56.126750] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.714 [2024-12-12 10:08:56.126860] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.714 [2024-12-12 10:08:56.127085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.714 [2024-12-12 10:08:56.127115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.714 [2024-12-12 10:08:56.127175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.714 [2024-12-12 10:08:56.127189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.714 [2024-12-12 10:08:56.127249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.714 [2024-12-12 10:08:56.127263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.973 NEW_FUNC[1/717]: 0x43c4e8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:42.973 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.973 #5 NEW cov: 12219 ft: 12216 corp: 2/24b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 3 CrossOver-CrossOver-InsertRepeatedBytes- 00:06:42.973 [2024-12-12 10:08:56.457503] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len 
(119252) > buf size (4096) 00:06:42.973 [2024-12-12 10:08:56.457631] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.973 [2024-12-12 10:08:56.457748] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.973 [2024-12-12 10:08:56.457979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.973 [2024-12-12 10:08:56.458020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.973 [2024-12-12 10:08:56.458091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.973 [2024-12-12 10:08:56.458112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.973 [2024-12-12 10:08:56.458179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.973 [2024-12-12 10:08:56.458199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.973 #6 NEW cov: 12332 ft: 12695 corp: 3/47b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ChangeBinInt- 00:06:42.973 [2024-12-12 10:08:56.517495] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.973 [2024-12-12 10:08:56.517612] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.974 [2024-12-12 10:08:56.517721] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.974 [2024-12-12 10:08:56.517926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.974 [2024-12-12 10:08:56.517951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.974 [2024-12-12 10:08:56.518006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.974 [2024-12-12 10:08:56.518019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.974 [2024-12-12 10:08:56.518072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.974 [2024-12-12 10:08:56.518085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.974 #7 NEW cov: 12338 ft: 12991 corp: 4/70b lim: 30 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 ChangeBit- 00:06:42.974 [2024-12-12 10:08:56.557519] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:42.974 [2024-12-12 10:08:56.557730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.974 [2024-12-12 10:08:56.557754] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.974 #8 NEW cov: 12429 ft: 13699 corp: 5/77b lim: 30 exec/s: 0 rss: 73Mb L: 7/23 MS: 1 InsertRepeatedBytes- 00:06:42.974 [2024-12-12 10:08:56.597773] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.974 [2024-12-12 10:08:56.597890] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:42.974 [2024-12-12 10:08:56.597990] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119264) > buf size (4096) 00:06:42.974 [2024-12-12 10:08:56.598092] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x740a 00:06:42.974 [2024-12-12 10:08:56.598298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.974 [2024-12-12 10:08:56.598324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.974 [2024-12-12 10:08:56.598377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.974 [2024-12-12 10:08:56.598390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.974 [2024-12-12 10:08:56.598444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74770074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.974 [2024-12-12 10:08:56.598458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.974 [2024-12-12 10:08:56.598511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.974 [2024-12-12 10:08:56.598525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.233 #9 NEW cov: 12429 ft: 14237 corp: 6/103b lim: 30 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:06:43.233 [2024-12-12 10:08:56.657913] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.233 [2024-12-12 10:08:56.658029] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.233 [2024-12-12 10:08:56.658132] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.233 [2024-12-12 10:08:56.658234] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (643540) > buf size (4096) 00:06:43.233 [2024-12-12 10:08:56.658445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.658470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.233 [2024-12-12 10:08:56.658527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.658542] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.233 [2024-12-12 10:08:56.658594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.658608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.233 [2024-12-12 10:08:56.658665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:7474020a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.658679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.233 #10 NEW cov: 12429 ft: 14360 corp: 7/127b lim: 30 exec/s: 0 rss: 73Mb L: 24/26 MS: 1 InsertByte- 00:06:43.233 [2024-12-12 10:08:56.718122] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.233 [2024-12-12 10:08:56.718240] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.233 [2024-12-12 10:08:56.718350] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119264) > buf size (4096) 00:06:43.233 [2024-12-12 10:08:56.718447] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x740a 00:06:43.233 [2024-12-12 10:08:56.718660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.718685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.233 [2024-12-12 10:08:56.718735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.718749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.233 [2024-12-12 10:08:56.718797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74770074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.718811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.233 [2024-12-12 10:08:56.718862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.718875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.233 #11 NEW cov: 12429 ft: 14468 corp: 8/153b lim: 30 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ShuffleBytes- 00:06:43.233 [2024-12-12 10:08:56.778177] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004a4a 00:06:43.233 [2024-12-12 10:08:56.778383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a834a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.778407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.233 
#12 NEW cov: 12429 ft: 14511 corp: 9/161b lim: 30 exec/s: 0 rss: 73Mb L: 8/26 MS: 1 InsertByte- 00:06:43.233 [2024-12-12 10:08:56.838303] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004a4a 00:06:43.233 [2024-12-12 10:08:56.838507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a834a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.233 [2024-12-12 10:08:56.838532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.493 #13 NEW cov: 12429 ft: 14561 corp: 10/169b lim: 30 exec/s: 0 rss: 74Mb L: 8/26 MS: 1 ChangeBinInt- 00:06:43.493 [2024-12-12 10:08:56.898483] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004a4a 00:06:43.493 [2024-12-12 10:08:56.898689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a834a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:56.898718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.493 #14 NEW cov: 12429 ft: 14612 corp: 11/177b lim: 30 exec/s: 0 rss: 74Mb L: 8/26 MS: 1 CopyPart- 00:06:43.493 [2024-12-12 10:08:56.938651] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:56.938774] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:56.938877] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:56.938980] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:56.939182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:7474000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:56.939208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.493 [2024-12-12 10:08:56.939264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:56.939277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.493 [2024-12-12 10:08:56.939331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740077 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:56.939344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.493 [2024-12-12 10:08:56.939396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:56.939409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.493 #15 NEW cov: 12429 ft: 14625 corp: 12/204b lim: 30 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 CopyPart- 00:06:43.493 [2024-12-12 10:08:56.998825] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > 
buf size (4096) 00:06:43.493 [2024-12-12 10:08:56.998936] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:56.999057] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:56.999256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:56.999281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.493 [2024-12-12 10:08:56.999338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740054 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:56.999352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.493 [2024-12-12 10:08:56.999404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:56.999417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.493 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:43.493 #16 NEW cov: 12452 ft: 14655 corp: 13/227b lim: 30 exec/s: 0 rss: 74Mb L: 23/27 MS: 1 ChangeBit- 00:06:43.493 [2024-12-12 10:08:57.038871] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:57.038984] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:57.039187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:57.039212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.493 [2024-12-12 10:08:57.039272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740034 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:57.039286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.493 #17 NEW cov: 12452 ft: 14978 corp: 14/243b lim: 30 exec/s: 0 rss: 74Mb L: 16/27 MS: 1 CrossOver- 00:06:43.493 [2024-12-12 10:08:57.079004] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:57.079116] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:57.079221] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:57.079426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740075 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:57.079452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.493 [2024-12-12 
10:08:57.079507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:57.079522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.493 [2024-12-12 10:08:57.079578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:57.079592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.493 #18 NEW cov: 12452 ft: 14987 corp: 15/266b lim: 30 exec/s: 0 rss: 74Mb L: 23/27 MS: 1 ChangeBit- 00:06:43.493 [2024-12-12 10:08:57.119100] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:57.119212] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.493 [2024-12-12 10:08:57.119424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:57.119449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.493 [2024-12-12 10:08:57.119506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.493 [2024-12-12 10:08:57.119520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.752 #19 NEW cov: 12452 ft: 15005 corp: 16/282b lim: 30 exec/s: 19 rss: 74Mb L: 16/27 MS: 1 EraseBytes- 00:06:43.752 [2024-12-12 10:08:57.179249] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:43.752 [2024-12-12 10:08:57.179464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.752 [2024-12-12 10:08:57.179489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.752 #20 NEW cov: 12452 ft: 15032 corp: 17/289b lim: 30 exec/s: 20 rss: 74Mb L: 7/27 MS: 1 EraseBytes- 00:06:43.752 [2024-12-12 10:08:57.219384] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:43.752 [2024-12-12 10:08:57.219497] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300008b8b 00:06:43.752 [2024-12-12 10:08:57.219695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.752 [2024-12-12 10:08:57.219725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.752 [2024-12-12 10:08:57.219787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7474838c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.752 [2024-12-12 10:08:57.219800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:06:43.752 #21 NEW cov: 12452 ft: 15057 corp: 18/305b lim: 30 exec/s: 21 rss: 74Mb L: 16/27 MS: 1 ChangeBinInt- 00:06:43.752 [2024-12-12 10:08:57.279518] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:43.752 [2024-12-12 10:08:57.279750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a024a cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.752 [2024-12-12 10:08:57.279774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.752 #22 NEW cov: 12452 ft: 15103 corp: 19/312b lim: 30 exec/s: 22 rss: 74Mb L: 7/27 MS: 1 ChangeByte- 00:06:43.752 [2024-12-12 10:08:57.319599] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a51 00:06:43.752 [2024-12-12 10:08:57.319813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.752 [2024-12-12 10:08:57.319837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.752 #23 NEW cov: 12452 ft: 15129 corp: 20/319b lim: 30 exec/s: 23 rss: 74Mb L: 7/27 MS: 1 ChangeBinInt- 00:06:43.752 [2024-12-12 10:08:57.379799] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004a4a 00:06:43.752 [2024-12-12 10:08:57.380004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a834a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.752 [2024-12-12 10:08:57.380027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.011 #24 NEW cov: 12452 ft: 15219 corp: 21/326b lim: 30 exec/s: 24 rss: 74Mb L: 7/27 MS: 1 EraseBytes- 00:06:44.011 [2024-12-12 10:08:57.439991] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.011 [2024-12-12 10:08:57.440105] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300008b8b 00:06:44.011 [2024-12-12 10:08:57.440308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.440332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.011 [2024-12-12 10:08:57.440388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7474838c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.440401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.011 #25 NEW cov: 12452 ft: 15223 corp: 22/342b lim: 30 exec/s: 25 rss: 74Mb L: 16/27 MS: 1 ChangeBit- 00:06:44.011 [2024-12-12 10:08:57.500214] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.011 [2024-12-12 10:08:57.500327] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.011 [2024-12-12 10:08:57.500430] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.011 [2024-12-12 10:08:57.500642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740077 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.500667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.011 [2024-12-12 10:08:57.500725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.500742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.011 [2024-12-12 10:08:57.500795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.500808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.011 #26 NEW cov: 12452 ft: 15229 corp: 23/365b lim: 30 exec/s: 26 rss: 74Mb L: 23/27 MS: 1 CopyPart- 00:06:44.011 [2024-12-12 10:08:57.540322] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (381396) > buf size (4096) 00:06:44.011 [2024-12-12 10:08:57.540435] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.011 [2024-12-12 10:08:57.540556] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.011 [2024-12-12 10:08:57.540765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74748177 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.540792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.011 [2024-12-12 10:08:57.540851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.540866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.011 [2024-12-12 10:08:57.540922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.540936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.011 #27 NEW cov: 12452 ft: 15250 corp: 24/388b lim: 30 exec/s: 27 rss: 74Mb L: 23/27 MS: 1 ChangeByte- 00:06:44.011 [2024-12-12 10:08:57.600483] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.011 [2024-12-12 10:08:57.600601] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300008b8b 00:06:44.011 [2024-12-12 10:08:57.600821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.600846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.011 [2024-12-12 10:08:57.600919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7474838c 
cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.011 [2024-12-12 10:08:57.600933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.011 #28 NEW cov: 12452 ft: 15252 corp: 25/404b lim: 30 exec/s: 28 rss: 75Mb L: 16/27 MS: 1 ChangeByte- 00:06:44.269 [2024-12-12 10:08:57.660614] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ff4a 00:06:44.269 [2024-12-12 10:08:57.660824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a814a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.269 [2024-12-12 10:08:57.660857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.269 #29 NEW cov: 12452 ft: 15259 corp: 26/412b lim: 30 exec/s: 29 rss: 75Mb L: 8/27 MS: 1 InsertByte- 00:06:44.269 [2024-12-12 10:08:57.720814] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200004a4a 00:06:44.269 [2024-12-12 10:08:57.721023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.269 [2024-12-12 10:08:57.721046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.269 #30 NEW cov: 12452 ft: 15280 corp: 27/420b lim: 30 exec/s: 30 rss: 75Mb L: 8/27 MS: 1 ShuffleBytes- 00:06:44.269 [2024-12-12 10:08:57.760871] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300004a4a 00:06:44.269 [2024-12-12 10:08:57.761078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a834a cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.269 [2024-12-12 10:08:57.761102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.270 #31 NEW cov: 12452 ft: 15329 corp: 28/427b lim: 30 exec/s: 31 rss: 75Mb L: 7/27 MS: 1 CrossOver- 00:06:44.270 [2024-12-12 10:08:57.801024] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.270 [2024-12-12 10:08:57.801138] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300008b8b 00:06:44.270 [2024-12-12 10:08:57.801353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.270 [2024-12-12 10:08:57.801378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.270 [2024-12-12 10:08:57.801436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7474838c cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.270 [2024-12-12 10:08:57.801450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.270 #32 NEW cov: 12452 ft: 15354 corp: 29/444b lim: 30 exec/s: 32 rss: 75Mb L: 17/27 MS: 1 InsertByte- 00:06:44.270 [2024-12-12 10:08:57.861229] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.270 [2024-12-12 10:08:57.861342] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.270 
[2024-12-12 10:08:57.861445] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.270 [2024-12-12 10:08:57.861656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740077 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.270 [2024-12-12 10:08:57.861680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.270 [2024-12-12 10:08:57.861737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.270 [2024-12-12 10:08:57.861751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.270 [2024-12-12 10:08:57.861808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.270 [2024-12-12 10:08:57.861821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.270 #33 NEW cov: 12452 ft: 15378 corp: 30/467b lim: 30 exec/s: 33 rss: 75Mb L: 23/27 MS: 1 ShuffleBytes- 00:06:44.270 [2024-12-12 10:08:57.901311] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.270 [2024-12-12 10:08:57.901430] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (172500) > buf size (4096) 00:06:44.270 [2024-12-12 10:08:57.901534] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f5f5 00:06:44.270 [2024-12-12 10:08:57.901749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.270 [2024-12-12 10:08:57.901773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.270 [2024-12-12 10:08:57.901828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:a8740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.270 [2024-12-12 10:08:57.901845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.270 [2024-12-12 10:08:57.901898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8c8b838b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.270 [2024-12-12 10:08:57.901912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.529 #34 NEW cov: 12452 ft: 15384 corp: 31/487b lim: 30 exec/s: 34 rss: 75Mb L: 20/27 MS: 1 InsertRepeatedBytes- 00:06:44.529 [2024-12-12 10:08:57.941393] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ff4a 00:06:44.529 [2024-12-12 10:08:57.941609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a814a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.529 [2024-12-12 10:08:57.941634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.529 #35 NEW cov: 12452 ft: 15397 corp: 32/495b lim: 30 exec/s: 35 rss: 
75Mb L: 8/27 MS: 1 CopyPart- 00:06:44.529 [2024-12-12 10:08:58.001655] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:06:44.529 [2024-12-12 10:08:58.001780] ctrlr.c:2670:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (115156) > buf size (4096) 00:06:44.529 [2024-12-12 10:08:58.001889] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f5f5 00:06:44.529 [2024-12-12 10:08:58.002095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.529 [2024-12-12 10:08:58.002120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.529 [2024-12-12 10:08:58.002174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:70740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.529 [2024-12-12 10:08:58.002187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.529 [2024-12-12 10:08:58.002238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:8c8b838b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.529 [2024-12-12 10:08:58.002252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.529 #36 NEW cov: 12452 ft: 15406 corp: 33/515b lim: 30 exec/s: 36 rss: 75Mb L: 20/27 MS: 1 ChangeByte- 00:06:44.529 [2024-12-12 10:08:58.061745] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ff4a 00:06:44.529 [2024-12-12 10:08:58.061960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a4a814a cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.529 [2024-12-12 10:08:58.061984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.529 #37 NEW cov: 12452 ft: 15425 corp: 34/523b lim: 30 exec/s: 37 rss: 75Mb L: 8/27 MS: 1 ChangeBinInt- 00:06:44.529 [2024-12-12 10:08:58.101832] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ff4a 00:06:44.529 [2024-12-12 10:08:58.102038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0ab381b5 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:44.529 [2024-12-12 10:08:58.102062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.529 #38 NEW cov: 12452 ft: 15471 corp: 35/531b lim: 30 exec/s: 19 rss: 75Mb L: 8/27 MS: 1 ChangeBinInt- 00:06:44.529 #38 DONE cov: 12452 ft: 15471 corp: 35/531b lim: 30 exec/s: 19 rss: 75Mb 00:06:44.529 Done 38 runs in 2 second(s) 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local 
timen=1 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:44.798 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:44.799 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:44.799 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:44.799 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:44.799 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:44.799 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:44.799 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.799 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:44.799 10:08:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:44.799 [2024-12-12 10:08:58.293978] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:44.799 [2024-12-12 10:08:58.294053] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469395 ] 00:06:45.061 [2024-12-12 10:08:58.580348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.061 [2024-12-12 10:08:58.635343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.061 [2024-12-12 10:08:58.694475] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.320 [2024-12-12 10:08:58.710793] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:45.320 INFO: Running with entropic power schedule (0xFF, 100). 00:06:45.320 INFO: Seed: 2334311042 00:06:45.320 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:06:45.320 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:06:45.320 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:45.320 INFO: A corpus is not provided, starting from an empty corpus 00:06:45.320 #2 INITED exec/s: 0 rss: 66Mb 00:06:45.320 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
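The echo leak: lines and the LSAN_OPTIONS value in the setup just above use LeakSanitizer's standard suppression mechanism; a short illustration of the same idea follows, with the suppression entries taken from this log and the surrounding commands assumed for the example.

# Illustration only: how the leak suppressions above are consumed by LeakSanitizer.
# Leaks whose stack trace contains one of these symbols are not reported;
# print_suppressions=0 keeps the matched-suppression summary out of the log, and
# report_objects=1 prints the addresses of any objects that do leak.
cat > /var/tmp/suppress_nvmf_fuzz <<'EOF'
leak:spdk_nvmf_qpair_disconnect
leak:nvmf_ctrlr_create
EOF
export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0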
00:06:45.320 This may also happen if the target rejected all inputs we tried so far 00:06:45.320 [2024-12-12 10:08:58.769271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.320 [2024-12-12 10:08:58.769301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.579 NEW_FUNC[1/716]: 0x43ef98 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:45.579 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:45.579 #11 NEW cov: 12158 ft: 12143 corp: 2/11b lim: 35 exec/s: 0 rss: 73Mb L: 10/10 MS: 4 ShuffleBytes-InsertByte-CrossOver-InsertRepeatedBytes- 00:06:45.579 [2024-12-12 10:08:59.100532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2c0a003f cdw11:3b000a3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.579 [2024-12-12 10:08:59.100607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.579 #16 NEW cov: 12271 ft: 12980 corp: 3/21b lim: 35 exec/s: 0 rss: 74Mb L: 10/10 MS: 5 ChangeBit-ChangeBit-ChangeBit-InsertByte-CrossOver- 00:06:45.579 [2024-12-12 10:08:59.150237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:96960096 cdw11:96009696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.580 [2024-12-12 10:08:59.150263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.580 #17 NEW cov: 12277 ft: 13192 corp: 4/28b lim: 35 exec/s: 0 rss: 74Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:06:45.580 [2024-12-12 10:08:59.190624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:96960096 cdw11:96009696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.580 [2024-12-12 10:08:59.190649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.580 [2024-12-12 10:08:59.190705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.580 [2024-12-12 10:08:59.190725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.580 [2024-12-12 10:08:59.190784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.580 [2024-12-12 10:08:59.190797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.839 #18 NEW cov: 12362 ft: 13816 corp: 5/51b lim: 35 exec/s: 0 rss: 74Mb L: 23/23 MS: 1 InsertRepeatedBytes- 00:06:45.839 [2024-12-12 10:08:59.250933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.839 [2024-12-12 10:08:59.250958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.839 [2024-12-12 
10:08:59.251015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.839 [2024-12-12 10:08:59.251029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.839 [2024-12-12 10:08:59.251087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.839 [2024-12-12 10:08:59.251100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.839 [2024-12-12 10:08:59.251156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.839 [2024-12-12 10:08:59.251169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.839 #19 NEW cov: 12362 ft: 14404 corp: 6/85b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:06:45.839 [2024-12-12 10:08:59.310941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:96960096 cdw11:96009696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.839 [2024-12-12 10:08:59.310967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.311046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.311062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.311118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.311132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.840 #20 NEW cov: 12362 ft: 14439 corp: 7/109b lim: 35 exec/s: 0 rss: 74Mb L: 24/34 MS: 1 InsertByte- 00:06:45.840 [2024-12-12 10:08:59.371259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.371284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.371356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.371370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.371427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.371441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.371498] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.371511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:45.840 #21 NEW cov: 12362 ft: 14504 corp: 8/143b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 ChangeBit- 00:06:45.840 [2024-12-12 10:08:59.431302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:96960096 cdw11:96009696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.431326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.431383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.431396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.431450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.431464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.840 #22 NEW cov: 12362 ft: 14580 corp: 9/166b lim: 35 exec/s: 0 rss: 74Mb L: 23/34 MS: 1 ShuffleBytes- 00:06:45.840 [2024-12-12 10:08:59.471558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.471583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.471638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.471651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.471711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.471745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.840 [2024-12-12 10:08:59.471802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.840 [2024-12-12 10:08:59.471815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.099 #23 NEW cov: 12362 ft: 14618 corp: 10/200b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 ShuffleBytes- 00:06:46.099 [2024-12-12 10:08:59.531788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.099 [2024-12-12 10:08:59.531812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.099 [2024-12-12 10:08:59.531885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.099 [2024-12-12 10:08:59.531899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.099 [2024-12-12 10:08:59.532011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.099 [2024-12-12 10:08:59.532026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.099 #24 NEW cov: 12362 ft: 15012 corp: 11/234b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 ChangeBinInt- 00:06:46.099 [2024-12-12 10:08:59.592041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.099 [2024-12-12 10:08:59.592065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.099 [2024-12-12 10:08:59.592123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.592137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.100 [2024-12-12 10:08:59.592195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff3b00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.592209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.100 [2024-12-12 10:08:59.592265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.592278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.100 [2024-12-12 10:08:59.592335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:3b00ff3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.592349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.100 #30 NEW cov: 12362 ft: 15045 corp: 12/269b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:06:46.100 [2024-12-12 10:08:59.631733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:96960096 cdw11:96009696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.631757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.100 [2024-12-12 10:08:59.631819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.631833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:06:46.100 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:46.100 #31 NEW cov: 12385 ft: 15264 corp: 13/286b lim: 35 exec/s: 0 rss: 74Mb L: 17/35 MS: 1 EraseBytes- 00:06:46.100 [2024-12-12 10:08:59.671729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:640a003f cdw11:3b000a3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.671754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.100 #32 NEW cov: 12385 ft: 15300 corp: 14/296b lim: 35 exec/s: 0 rss: 75Mb L: 10/35 MS: 1 ChangeByte- 00:06:46.100 [2024-12-12 10:08:59.732167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:96960096 cdw11:96009696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.732192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.100 [2024-12-12 10:08:59.732264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.732278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.100 [2024-12-12 10:08:59.732334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.100 [2024-12-12 10:08:59.732349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.359 #33 NEW cov: 12385 ft: 15320 corp: 15/319b lim: 35 exec/s: 33 rss: 75Mb L: 23/35 MS: 1 ShuffleBytes- 00:06:46.359 [2024-12-12 10:08:59.792170] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:46.359 [2024-12-12 10:08:59.792555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.792580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.792638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:050000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.792652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.792709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.792730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.792784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.792798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:06:46.359 #34 NEW cov: 12396 ft: 15405 corp: 16/353b lim: 35 exec/s: 34 rss: 75Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:46.359 [2024-12-12 10:08:59.832330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9e9e009e cdw11:9e009e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.832355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.832414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9e9e009e cdw11:8a009e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.832430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.359 #39 NEW cov: 12396 ft: 15411 corp: 17/367b lim: 35 exec/s: 39 rss: 75Mb L: 14/35 MS: 5 CopyPart-ChangeBit-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:06:46.359 [2024-12-12 10:08:59.872415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9e9e009e cdw11:9e009e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.872440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.872501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9e9e009e cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.872515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.359 #40 NEW cov: 12396 ft: 15426 corp: 18/386b lim: 35 exec/s: 40 rss: 75Mb L: 19/35 MS: 1 InsertRepeatedBytes- 00:06:46.359 [2024-12-12 10:08:59.932994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.933019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.933078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff3f00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.933092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.933166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.933180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.933235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.933249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.933305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:fffd00ff cdw11:3b00ff3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 
[2024-12-12 10:08:59.933318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.359 #41 NEW cov: 12396 ft: 15441 corp: 19/421b lim: 35 exec/s: 41 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:06:46.359 [2024-12-12 10:08:59.972654] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:46.359 [2024-12-12 10:08:59.973007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3a000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.973033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.973092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:050000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.973106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.973173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.359 [2024-12-12 10:08:59.973189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.359 [2024-12-12 10:08:59.973250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.360 [2024-12-12 10:08:59.973263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.619 #42 NEW cov: 12396 ft: 15464 corp: 20/455b lim: 35 exec/s: 42 rss: 75Mb L: 34/35 MS: 1 ChangeBit- 00:06:46.619 [2024-12-12 10:09:00.032881] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:46.619 [2024-12-12 10:09:00.033222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3a002a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.033249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.033310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:050000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.033324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.033384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00ff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.033400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.033458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.033472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.619 #44 NEW cov: 12396 ft: 15492 corp: 21/487b lim: 35 exec/s: 44 rss: 75Mb L: 32/35 MS: 2 ChangeByte-CrossOver- 00:06:46.619 [2024-12-12 10:09:00.073218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.073243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.073302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.073316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.073375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.073389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.073446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.073460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.619 #45 NEW cov: 12396 ft: 15510 corp: 22/521b lim: 35 exec/s: 45 rss: 75Mb L: 34/35 MS: 1 ShuffleBytes- 00:06:46.619 [2024-12-12 10:09:00.113473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.113499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.113559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.113577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.113636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.113650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.113709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.113728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.113788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:fffd00ff cdw11:3b00ff3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.113802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:06:46.619 #46 NEW cov: 12396 ft: 15540 corp: 23/556b lim: 35 exec/s: 46 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:06:46.619 [2024-12-12 10:09:00.173625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.173651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.173712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.173731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.173788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.173801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.173861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.173874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.173932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:fdff00ff cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.173946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.619 #47 NEW cov: 12396 ft: 15605 corp: 24/591b lim: 35 exec/s: 47 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:06:46.619 [2024-12-12 10:09:00.213180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:efc4000a cdw11:c400c4c4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.213206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.619 #48 NEW cov: 12396 ft: 15668 corp: 25/601b lim: 35 exec/s: 48 rss: 75Mb L: 10/35 MS: 1 ChangeBinInt- 00:06:46.619 [2024-12-12 10:09:00.253474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9e9e009e cdw11:9e009e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.253500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.619 [2024-12-12 10:09:00.253558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9e9e009e cdw11:8a009e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.619 [2024-12-12 10:09:00.253576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.878 #49 NEW cov: 12396 ft: 15674 corp: 26/615b lim: 35 exec/s: 49 rss: 75Mb L: 14/35 MS: 1 CopyPart- 00:06:46.878 [2024-12-12 10:09:00.293873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 
cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.293899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.878 [2024-12-12 10:09:00.293958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.293973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.878 [2024-12-12 10:09:00.294093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.294107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.878 #50 NEW cov: 12396 ft: 15731 corp: 27/649b lim: 35 exec/s: 50 rss: 75Mb L: 34/35 MS: 1 CrossOver- 00:06:46.878 [2024-12-12 10:09:00.353995] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:46.878 [2024-12-12 10:09:00.354233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.354257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.878 [2024-12-12 10:09:00.354315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.354329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.878 [2024-12-12 10:09:00.354384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff3b00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.354398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.878 [2024-12-12 10:09:00.354455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.354469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.878 [2024-12-12 10:09:00.354528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:fcff0000 cdw11:3b00ff3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.354544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.878 #51 NEW cov: 12396 ft: 15734 corp: 28/684b lim: 35 exec/s: 51 rss: 75Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:46.878 [2024-12-12 10:09:00.414177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.414202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.878 
[2024-12-12 10:09:00.414263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.414276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.878 [2024-12-12 10:09:00.414335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.414351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.878 [2024-12-12 10:09:00.414409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.878 [2024-12-12 10:09:00.414423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.878 #52 NEW cov: 12396 ft: 15742 corp: 29/718b lim: 35 exec/s: 52 rss: 75Mb L: 34/35 MS: 1 CopyPart- 00:06:46.879 [2024-12-12 10:09:00.454172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:96960096 cdw11:96009696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.879 [2024-12-12 10:09:00.454197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.879 [2024-12-12 10:09:00.454254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.879 [2024-12-12 10:09:00.454267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.879 [2024-12-12 10:09:00.454323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:85850085 cdw11:3b00853b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.879 [2024-12-12 10:09:00.454336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.879 #53 NEW cov: 12396 ft: 15756 corp: 30/741b lim: 35 exec/s: 53 rss: 75Mb L: 23/35 MS: 1 CrossOver- 00:06:46.879 [2024-12-12 10:09:00.494267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9e9e009e cdw11:9e009e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.879 [2024-12-12 10:09:00.494292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.879 [2024-12-12 10:09:00.494349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9e9e009e cdw11:26009e26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.879 [2024-12-12 10:09:00.494363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.879 [2024-12-12 10:09:00.494420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:26260026 cdw11:8a00269e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.879 [2024-12-12 10:09:00.494434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.138 #54 NEW cov: 12396 ft: 15794 
corp: 31/762b lim: 35 exec/s: 54 rss: 75Mb L: 21/35 MS: 1 InsertRepeatedBytes- 00:06:47.138 [2024-12-12 10:09:00.554620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3b000a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.554645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.554704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.554722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.554780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:9600ff96 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.554794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.554848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:96ff0096 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.554864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.138 #55 NEW cov: 12396 ft: 15815 corp: 32/796b lim: 35 exec/s: 55 rss: 75Mb L: 34/35 MS: 1 CrossOver- 00:06:47.138 [2024-12-12 10:09:00.594448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.594474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.594533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.594547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.138 #56 NEW cov: 12396 ft: 15899 corp: 33/812b lim: 35 exec/s: 56 rss: 75Mb L: 16/35 MS: 1 EraseBytes- 00:06:47.138 [2024-12-12 10:09:00.634680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:96960096 cdw11:96009696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.634704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.634765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:85ff0085 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.634779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.634837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.634850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.138 #57 NEW cov: 12396 ft: 15975 corp: 34/839b lim: 35 exec/s: 57 rss: 75Mb L: 27/35 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:47.138 [2024-12-12 10:09:00.694865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:9e9e009e cdw11:9e009e9e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.694890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.694948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:9e9e009e cdw11:26005b26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.694962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.695019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:26260026 cdw11:8a00269e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.695034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.138 #58 NEW cov: 12396 ft: 16007 corp: 35/860b lim: 35 exec/s: 58 rss: 76Mb L: 21/35 MS: 1 ChangeByte- 00:06:47.138 [2024-12-12 10:09:00.755047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:96960032 cdw11:96009696 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.755072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.755130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:8585000a cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.755144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.138 [2024-12-12 10:09:00.755199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:85850085 cdw11:85008585 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.138 [2024-12-12 10:09:00.755217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.398 #59 NEW cov: 12396 ft: 16050 corp: 36/884b lim: 35 exec/s: 29 rss: 76Mb L: 24/35 MS: 1 InsertByte- 00:06:47.398 #59 DONE cov: 12396 ft: 16050 corp: 36/884b lim: 35 exec/s: 29 rss: 76Mb 00:06:47.398 ###### Recommended dictionary. ###### 00:06:47.398 "\377\377\377\377" # Uses: 0 00:06:47.398 ###### End of recommended dictionary. 
###### 00:06:47.398 Done 59 runs in 2 second(s) 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:47.398 10:09:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:47.398 [2024-12-12 10:09:00.931106] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:47.398 [2024-12-12 10:09:00.931196] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469969 ] 00:06:47.657 [2024-12-12 10:09:01.212468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.657 [2024-12-12 10:09:01.273987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.915 [2024-12-12 10:09:01.333620] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.915 [2024-12-12 10:09:01.349944] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:47.915 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:47.915 INFO: Seed: 681373342 00:06:47.915 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:06:47.915 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:06:47.915 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:47.915 INFO: A corpus is not provided, starting from an empty corpus 00:06:47.915 #2 INITED exec/s: 0 rss: 65Mb 00:06:47.915 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:47.915 This may also happen if the target rejected all inputs we tried so far 00:06:48.174 NEW_FUNC[1/705]: 0x440c78 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:48.174 NEW_FUNC[2/705]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:48.174 #8 NEW cov: 12060 ft: 12061 corp: 2/21b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:06:48.174 #17 NEW cov: 12191 ft: 12652 corp: 3/38b lim: 20 exec/s: 0 rss: 73Mb L: 17/20 MS: 4 ChangeByte-CrossOver-ChangeASCIIInt-CrossOver- 00:06:48.432 #23 NEW cov: 12197 ft: 12870 corp: 4/57b lim: 20 exec/s: 0 rss: 73Mb L: 19/20 MS: 1 EraseBytes- 00:06:48.432 #27 NEW cov: 12282 ft: 13160 corp: 5/75b lim: 20 exec/s: 0 rss: 73Mb L: 18/20 MS: 4 ChangeBit-ShuffleBytes-ChangeBit-CrossOver- 00:06:48.432 #28 NEW cov: 12282 ft: 13238 corp: 6/95b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 CopyPart- 00:06:48.432 #29 NEW cov: 12282 ft: 13292 corp: 7/113b lim: 20 exec/s: 0 rss: 73Mb L: 18/20 MS: 1 ShuffleBytes- 00:06:48.433 #30 NEW cov: 12282 ft: 13344 corp: 8/132b lim: 20 exec/s: 0 rss: 73Mb L: 19/20 MS: 1 ShuffleBytes- 00:06:48.690 #31 NEW cov: 12282 ft: 13405 corp: 9/150b lim: 20 exec/s: 0 rss: 73Mb L: 18/20 MS: 1 ChangeByte- 00:06:48.690 #32 NEW cov: 12282 ft: 13435 corp: 10/170b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 CrossOver- 00:06:48.690 #33 NEW cov: 12282 ft: 13517 corp: 11/187b lim: 20 exec/s: 0 rss: 73Mb L: 17/20 MS: 1 ShuffleBytes- 00:06:48.690 #34 NEW cov: 12290 ft: 13714 corp: 12/202b lim: 20 exec/s: 0 rss: 73Mb L: 15/20 MS: 1 EraseBytes- 00:06:48.690 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:48.690 #35 NEW cov: 12313 ft: 13824 corp: 13/221b lim: 20 exec/s: 0 rss: 73Mb L: 19/20 MS: 1 ShuffleBytes- 00:06:48.949 #36 NEW cov: 12313 ft: 13846 corp: 14/239b lim: 20 exec/s: 0 rss: 74Mb L: 18/20 MS: 1 InsertByte- 00:06:48.949 [2024-12-12 10:09:02.378406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:48.949 [2024-12-12 10:09:02.378444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.949 NEW_FUNC[1/19]: 0x137c5c8 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3485 00:06:48.949 NEW_FUNC[2/19]: 0x137d148 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3427 00:06:48.949 #37 NEW cov: 12615 ft: 14187 corp: 15/258b lim: 20 exec/s: 37 rss: 74Mb L: 19/20 MS: 1 ChangeBinInt- 00:06:48.949 #38 NEW cov: 12615 ft: 14234 corp: 16/276b lim: 20 exec/s: 38 rss: 74Mb L: 18/20 MS: 1 ChangeASCIIInt- 00:06:48.949 #39 NEW cov: 12615 ft: 14272 corp: 17/294b lim: 20 exec/s: 39 rss: 74Mb L: 18/20 MS: 1 
CrossOver- 00:06:48.949 #40 NEW cov: 12615 ft: 14327 corp: 18/312b lim: 20 exec/s: 40 rss: 74Mb L: 18/20 MS: 1 ChangeASCIIInt- 00:06:49.208 #41 NEW cov: 12615 ft: 14688 corp: 19/318b lim: 20 exec/s: 41 rss: 74Mb L: 6/20 MS: 1 CrossOver- 00:06:49.208 #42 NEW cov: 12616 ft: 14949 corp: 20/329b lim: 20 exec/s: 42 rss: 74Mb L: 11/20 MS: 1 EraseBytes- 00:06:49.208 #43 NEW cov: 12616 ft: 14970 corp: 21/347b lim: 20 exec/s: 43 rss: 74Mb L: 18/20 MS: 1 CopyPart- 00:06:49.208 #44 NEW cov: 12616 ft: 14988 corp: 22/353b lim: 20 exec/s: 44 rss: 74Mb L: 6/20 MS: 1 CopyPart- 00:06:49.208 #45 NEW cov: 12616 ft: 15000 corp: 23/372b lim: 20 exec/s: 45 rss: 74Mb L: 19/20 MS: 1 InsertByte- 00:06:49.467 #46 NEW cov: 12616 ft: 15020 corp: 24/391b lim: 20 exec/s: 46 rss: 74Mb L: 19/20 MS: 1 InsertByte- 00:06:49.467 #47 NEW cov: 12616 ft: 15100 corp: 25/411b lim: 20 exec/s: 47 rss: 74Mb L: 20/20 MS: 1 CrossOver- 00:06:49.467 #48 NEW cov: 12616 ft: 15124 corp: 26/430b lim: 20 exec/s: 48 rss: 74Mb L: 19/20 MS: 1 InsertRepeatedBytes- 00:06:49.467 #49 NEW cov: 12616 ft: 15140 corp: 27/450b lim: 20 exec/s: 49 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:06:49.467 #50 NEW cov: 12616 ft: 15143 corp: 28/469b lim: 20 exec/s: 50 rss: 74Mb L: 19/20 MS: 1 ShuffleBytes- 00:06:49.726 #51 NEW cov: 12616 ft: 15158 corp: 29/489b lim: 20 exec/s: 51 rss: 75Mb L: 20/20 MS: 1 CopyPart- 00:06:49.726 #52 NEW cov: 12616 ft: 15172 corp: 30/509b lim: 20 exec/s: 52 rss: 75Mb L: 20/20 MS: 1 InsertByte- 00:06:49.726 #53 NEW cov: 12616 ft: 15183 corp: 31/527b lim: 20 exec/s: 53 rss: 75Mb L: 18/20 MS: 1 ChangeByte- 00:06:49.726 #54 NEW cov: 12616 ft: 15187 corp: 32/546b lim: 20 exec/s: 54 rss: 75Mb L: 19/20 MS: 1 ChangeBinInt- 00:06:49.726 #55 NEW cov: 12616 ft: 15192 corp: 33/565b lim: 20 exec/s: 55 rss: 75Mb L: 19/20 MS: 1 ChangeByte- 00:06:49.986 #56 NEW cov: 12616 ft: 15204 corp: 34/584b lim: 20 exec/s: 56 rss: 75Mb L: 19/20 MS: 1 CopyPart- 00:06:49.986 #57 NEW cov: 12616 ft: 15214 corp: 35/602b lim: 20 exec/s: 28 rss: 75Mb L: 18/20 MS: 1 ShuffleBytes- 00:06:49.986 #57 DONE cov: 12616 ft: 15214 corp: 35/602b lim: 20 exec/s: 28 rss: 75Mb 00:06:49.986 Done 57 runs in 2 second(s) 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:49.986 10:09:03 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:49.986 10:09:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:49.986 [2024-12-12 10:09:03.573972] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:49.986 [2024-12-12 10:09:03.574057] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470572 ] 00:06:50.245 [2024-12-12 10:09:03.863477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.504 [2024-12-12 10:09:03.928498] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.504 [2024-12-12 10:09:03.987840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.504 [2024-12-12 10:09:04.004171] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:50.504 INFO: Running with entropic power schedule (0xFF, 100). 00:06:50.504 INFO: Seed: 3334360640 00:06:50.504 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:06:50.504 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:06:50.504 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:50.504 INFO: A corpus is not provided, starting from an empty corpus 00:06:50.504 #2 INITED exec/s: 0 rss: 66Mb 00:06:50.504 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:50.504 This may also happen if the target rejected all inputs we tried so far 00:06:50.504 [2024-12-12 10:09:04.059953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.504 [2024-12-12 10:09:04.059983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.504 [2024-12-12 10:09:04.060042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.504 [2024-12-12 10:09:04.060057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.504 [2024-12-12 10:09:04.060112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00010000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.504 [2024-12-12 10:09:04.060126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:50.763 NEW_FUNC[1/717]: 0x441d78 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:50.763 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:50.763 #5 NEW cov: 12178 ft: 12177 corp: 2/22b lim: 35 exec/s: 0 rss: 73Mb L: 21/21 MS: 3 CMP-CopyPart-InsertRepeatedBytes- DE: "\001\000\001\021"- 00:06:50.763 [2024-12-12 10:09:04.390829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001ff01 cdw11:11ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.763 [2024-12-12 10:09:04.390913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.022 #7 NEW cov: 12293 ft: 13580 corp: 3/31b lim: 35 exec/s: 0 rss: 73Mb L: 9/21 MS: 2 InsertRepeatedBytes-PersAutoDict- DE: "\001\000\001\021"- 00:06:51.022 [2024-12-12 10:09:04.440999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.441026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.441100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.441115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.441172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.441185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.441244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 
[2024-12-12 10:09:04.441257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.022 #8 NEW cov: 12299 ft: 14052 corp: 4/64b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:06:51.022 [2024-12-12 10:09:04.501190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.501219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.501280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.501294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.501351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.501365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.501419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.501433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.022 #9 NEW cov: 12384 ft: 14321 corp: 5/97b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:06:51.022 [2024-12-12 10:09:04.561300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.561326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.561387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.561401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.561459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff32ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.561472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.561530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.561543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.022 #10 NEW cov: 12384 ft: 14503 corp: 6/131b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertByte- 00:06:51.022 [2024-12-12 10:09:04.600932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.600957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.022 #11 NEW cov: 12384 ft: 14622 corp: 7/139b lim: 35 exec/s: 0 rss: 73Mb L: 8/34 MS: 1 CrossOver- 00:06:51.022 [2024-12-12 10:09:04.641683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:01030a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.022 [2024-12-12 10:09:04.641709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.022 [2024-12-12 10:09:04.641771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.023 [2024-12-12 10:09:04.641785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.023 [2024-12-12 10:09:04.641857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.023 [2024-12-12 10:09:04.641871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.023 [2024-12-12 10:09:04.641929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.023 [2024-12-12 10:09:04.641942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.023 [2024-12-12 10:09:04.641997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00010000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.023 [2024-12-12 10:09:04.642013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.282 #12 NEW cov: 12384 ft: 14871 corp: 8/174b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CMP- DE: "\001\003"- 00:06:51.282 [2024-12-12 10:09:04.681448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.681473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.282 [2024-12-12 10:09:04.681528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.681542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.282 [2024-12-12 10:09:04.681595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.681608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.282 #13 NEW cov: 12384 ft: 14937 corp: 9/201b lim: 35 exec/s: 0 rss: 73Mb L: 27/35 MS: 1 
EraseBytes- 00:06:51.282 [2024-12-12 10:09:04.721214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001ffff cdw11:11ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.721239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.282 #14 NEW cov: 12384 ft: 14999 corp: 10/210b lim: 35 exec/s: 0 rss: 73Mb L: 9/35 MS: 1 ShuffleBytes- 00:06:51.282 [2024-12-12 10:09:04.781893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fbfb0afb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.781918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.282 [2024-12-12 10:09:04.781974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.781988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.282 [2024-12-12 10:09:04.782043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.782056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.282 [2024-12-12 10:09:04.782109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.782123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.282 #15 NEW cov: 12384 ft: 15038 corp: 11/243b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:06:51.282 [2024-12-12 10:09:04.821522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001ffff cdw11:11000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.821550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.282 #16 NEW cov: 12384 ft: 15072 corp: 12/253b lim: 35 exec/s: 0 rss: 74Mb L: 10/35 MS: 1 InsertByte- 00:06:51.282 [2024-12-12 10:09:04.882177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fbfb0afb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.882202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.282 [2024-12-12 10:09:04.882257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.882271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.282 [2024-12-12 10:09:04.882327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 
10:09:04.882340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.282 [2024-12-12 10:09:04.882396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.282 [2024-12-12 10:09:04.882410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.541 #17 NEW cov: 12384 ft: 15109 corp: 13/286b lim: 35 exec/s: 0 rss: 74Mb L: 33/35 MS: 1 ChangeByte- 00:06:51.541 [2024-12-12 10:09:04.942350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fbfb0afb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.541 [2024-12-12 10:09:04.942374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.541 [2024-12-12 10:09:04.942431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.541 [2024-12-12 10:09:04.942445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.541 [2024-12-12 10:09:04.942516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fbc7fbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.541 [2024-12-12 10:09:04.942530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.541 [2024-12-12 10:09:04.942583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:04.942596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.542 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:51.542 #18 NEW cov: 12407 ft: 15215 corp: 14/319b lim: 35 exec/s: 0 rss: 74Mb L: 33/35 MS: 1 ChangeByte- 00:06:51.542 [2024-12-12 10:09:04.981957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0003ffff cdw11:11000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:04.981982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.542 #19 NEW cov: 12407 ft: 15263 corp: 15/329b lim: 35 exec/s: 0 rss: 74Mb L: 10/35 MS: 1 ChangeBit- 00:06:51.542 [2024-12-12 10:09:05.042129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001ff01 cdw11:11ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.042154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.542 #20 NEW cov: 12407 ft: 15269 corp: 16/339b lim: 35 exec/s: 20 rss: 74Mb L: 10/35 MS: 1 InsertByte- 00:06:51.542 [2024-12-12 10:09:05.082890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:01030a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.082915] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.542 [2024-12-12 10:09:05.082976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.082991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.542 [2024-12-12 10:09:05.083048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.083061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.542 [2024-12-12 10:09:05.083116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.083130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.542 [2024-12-12 10:09:05.083185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00010000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.083199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:51.542 #21 NEW cov: 12407 ft: 15326 corp: 17/374b lim: 35 exec/s: 21 rss: 74Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\003"- 00:06:51.542 [2024-12-12 10:09:05.122692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.122720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.542 [2024-12-12 10:09:05.122779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.122793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.542 [2024-12-12 10:09:05.122849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00010000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.122863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.542 #22 NEW cov: 12407 ft: 15331 corp: 18/395b lim: 35 exec/s: 22 rss: 74Mb L: 21/35 MS: 1 ShuffleBytes- 00:06:51.542 [2024-12-12 10:09:05.162956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fbfb1afb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.162981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.542 [2024-12-12 10:09:05.163054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 
[2024-12-12 10:09:05.163068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.542 [2024-12-12 10:09:05.163125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.163139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.542 [2024-12-12 10:09:05.163199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.542 [2024-12-12 10:09:05.163213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.801 #23 NEW cov: 12407 ft: 15405 corp: 19/428b lim: 35 exec/s: 23 rss: 74Mb L: 33/35 MS: 1 ChangeBit- 00:06:51.801 [2024-12-12 10:09:05.202621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0003ffff cdw11:ff000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.801 [2024-12-12 10:09:05.202647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.801 #29 NEW cov: 12407 ft: 15426 corp: 20/438b lim: 35 exec/s: 29 rss: 74Mb L: 10/35 MS: 1 CopyPart- 00:06:51.801 [2024-12-12 10:09:05.263285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fbfb0afb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.801 [2024-12-12 10:09:05.263311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.801 [2024-12-12 10:09:05.263369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:fbfafbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.801 [2024-12-12 10:09:05.263383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.801 [2024-12-12 10:09:05.263439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.801 [2024-12-12 10:09:05.263453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.801 [2024-12-12 10:09:05.263508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:fbfbfbfb cdw11:fbfb0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.801 [2024-12-12 10:09:05.263521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.801 #30 NEW cov: 12407 ft: 15460 corp: 21/471b lim: 35 exec/s: 30 rss: 74Mb L: 33/35 MS: 1 ChangeBit- 00:06:51.801 [2024-12-12 10:09:05.322948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.801 [2024-12-12 10:09:05.322973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.801 #31 NEW cov: 12407 ft: 15473 corp: 22/479b lim: 35 exec/s: 31 rss: 74Mb L: 8/35 MS: 1 CopyPart- 
00:06:51.801 [2024-12-12 10:09:05.383100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001ff01 cdw11:11ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:51.801 [2024-12-12 10:09:05.383126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.801 #32 NEW cov: 12407 ft: 15484 corp: 23/488b lim: 35 exec/s: 32 rss: 74Mb L: 9/35 MS: 1 EraseBytes- 00:06:52.060 [2024-12-12 10:09:05.443975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:01230a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.060 [2024-12-12 10:09:05.443999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.060 [2024-12-12 10:09:05.444057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.060 [2024-12-12 10:09:05.444071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.060 [2024-12-12 10:09:05.444126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.060 [2024-12-12 10:09:05.444142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.060 [2024-12-12 10:09:05.444199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.060 [2024-12-12 10:09:05.444212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.060 [2024-12-12 10:09:05.444271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00010000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.060 [2024-12-12 10:09:05.444284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.060 #33 NEW cov: 12407 ft: 15499 corp: 24/523b lim: 35 exec/s: 33 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:52.060 [2024-12-12 10:09:05.503465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.060 [2024-12-12 10:09:05.503490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.061 #34 NEW cov: 12407 ft: 15534 corp: 25/534b lim: 35 exec/s: 34 rss: 74Mb L: 11/35 MS: 1 InsertRepeatedBytes- 00:06:52.061 [2024-12-12 10:09:05.543581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a400a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.061 [2024-12-12 10:09:05.543606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.061 #35 NEW cov: 12407 ft: 15544 corp: 26/542b lim: 35 exec/s: 35 rss: 74Mb L: 8/35 MS: 1 ChangeBit- 00:06:52.061 [2024-12-12 10:09:05.583654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 
cdw10:00000a00 cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.061 [2024-12-12 10:09:05.583679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.061 #36 NEW cov: 12407 ft: 15550 corp: 27/550b lim: 35 exec/s: 36 rss: 75Mb L: 8/35 MS: 1 CrossOver- 00:06:52.061 [2024-12-12 10:09:05.644476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.061 [2024-12-12 10:09:05.644501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.061 [2024-12-12 10:09:05.644559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.061 [2024-12-12 10:09:05.644573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.061 [2024-12-12 10:09:05.644628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff32ffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.061 [2024-12-12 10:09:05.644641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.061 [2024-12-12 10:09:05.644697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.061 [2024-12-12 10:09:05.644710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.061 [2024-12-12 10:09:05.644769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00010000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.061 [2024-12-12 10:09:05.644781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.061 #37 NEW cov: 12407 ft: 15568 corp: 28/585b lim: 35 exec/s: 37 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:06:52.320 [2024-12-12 10:09:05.704046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:01030a00 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.704071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.320 #38 NEW cov: 12407 ft: 15586 corp: 29/595b lim: 35 exec/s: 38 rss: 75Mb L: 10/35 MS: 1 PersAutoDict- DE: "\001\003"- 00:06:52.320 [2024-12-12 10:09:05.764182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a400a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.764207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.320 #39 NEW cov: 12407 ft: 15591 corp: 30/604b lim: 35 exec/s: 39 rss: 75Mb L: 9/35 MS: 1 InsertByte- 00:06:52.320 [2024-12-12 10:09:05.824867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.824892] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.320 [2024-12-12 10:09:05.824948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.824961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.320 [2024-12-12 10:09:05.825018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ff32ffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.825031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.320 [2024-12-12 10:09:05.825088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.825102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.320 #40 NEW cov: 12407 ft: 15592 corp: 31/638b lim: 35 exec/s: 40 rss: 75Mb L: 34/35 MS: 1 ShuffleBytes- 00:06:52.320 [2024-12-12 10:09:05.865190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:01030a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.865214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.320 [2024-12-12 10:09:05.865272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.865286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.320 [2024-12-12 10:09:05.865346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.865359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.320 [2024-12-12 10:09:05.865416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.865429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.320 [2024-12-12 10:09:05.865487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00010000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.865503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.320 #41 NEW cov: 12407 ft: 15598 corp: 32/673b lim: 35 exec/s: 41 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:06:52.320 [2024-12-12 10:09:05.924843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:000a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.924869] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.320 [2024-12-12 10:09:05.924926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000a40 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.320 [2024-12-12 10:09:05.924939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.580 #43 NEW cov: 12407 ft: 15809 corp: 33/688b lim: 35 exec/s: 43 rss: 75Mb L: 15/35 MS: 2 CrossOver-CrossOver- 00:06:52.580 [2024-12-12 10:09:05.985310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001ff01 cdw11:11ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.580 [2024-12-12 10:09:05.985336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.580 [2024-12-12 10:09:05.985395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:dddddddd cdw11:dddd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.580 [2024-12-12 10:09:05.985409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.580 [2024-12-12 10:09:05.985467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:dddddddd cdw11:dddd0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.580 [2024-12-12 10:09:05.985480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.580 [2024-12-12 10:09:05.985536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:dddddddd cdw11:ff280000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.580 [2024-12-12 10:09:05.985549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.580 #44 NEW cov: 12407 ft: 15816 corp: 34/716b lim: 35 exec/s: 44 rss: 75Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:06:52.580 [2024-12-12 10:09:06.045018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0001ffff cdw11:11ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:52.580 [2024-12-12 10:09:06.045043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.580 #45 NEW cov: 12407 ft: 15832 corp: 35/726b lim: 35 exec/s: 22 rss: 75Mb L: 10/35 MS: 1 InsertByte- 00:06:52.580 #45 DONE cov: 12407 ft: 15832 corp: 35/726b lim: 35 exec/s: 22 rss: 75Mb 00:06:52.580 ###### Recommended dictionary. ###### 00:06:52.580 "\001\000\001\021" # Uses: 1 00:06:52.580 "\001\003" # Uses: 2 00:06:52.580 ###### End of recommended dictionary. 
###### 00:06:52.580 Done 45 runs in 2 second(s) 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:52.580 10:09:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:52.839 [2024-12-12 10:09:06.220986] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:52.839 [2024-12-12 10:09:06.221077] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471427 ] 00:06:52.839 [2024-12-12 10:09:06.421973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.840 [2024-12-12 10:09:06.453189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.099 [2024-12-12 10:09:06.512661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.099 [2024-12-12 10:09:06.529002] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:53.099 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:53.099 INFO: Seed: 1563393817 00:06:53.099 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:06:53.099 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:06:53.099 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:53.099 INFO: A corpus is not provided, starting from an empty corpus 00:06:53.099 #2 INITED exec/s: 0 rss: 66Mb 00:06:53.099 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:53.099 This may also happen if the target rejected all inputs we tried so far 00:06:53.099 [2024-12-12 10:09:06.588640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.099 [2024-12-12 10:09:06.588668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.099 [2024-12-12 10:09:06.588724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.099 [2024-12-12 10:09:06.588738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.099 [2024-12-12 10:09:06.588775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.099 [2024-12-12 10:09:06.588789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.358 NEW_FUNC[1/717]: 0x443f18 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:53.358 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:53.358 #18 NEW cov: 12191 ft: 12189 corp: 2/29b lim: 45 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:06:53.358 [2024-12-12 10:09:06.919905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.358 [2024-12-12 10:09:06.919958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.358 [2024-12-12 10:09:06.920037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.358 [2024-12-12 10:09:06.920063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.358 [2024-12-12 10:09:06.920141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.358 [2024-12-12 10:09:06.920166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.358 #19 NEW cov: 12304 ft: 13027 corp: 3/61b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:53.358 [2024-12-12 10:09:06.969499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) 
qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.358 [2024-12-12 10:09:06.969524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.358 [2024-12-12 10:09:06.969578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.358 [2024-12-12 10:09:06.969592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.358 [2024-12-12 10:09:06.969642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.358 [2024-12-12 10:09:06.969655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.618 #20 NEW cov: 12310 ft: 13242 corp: 4/89b lim: 45 exec/s: 0 rss: 73Mb L: 28/32 MS: 1 ChangeBinInt- 00:06:53.618 [2024-12-12 10:09:07.029517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.029540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.618 [2024-12-12 10:09:07.029593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.029606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.618 #21 NEW cov: 12395 ft: 13674 corp: 5/114b lim: 45 exec/s: 0 rss: 73Mb L: 25/32 MS: 1 CrossOver- 00:06:53.618 [2024-12-12 10:09:07.089801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.089825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.618 [2024-12-12 10:09:07.089894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.089908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.618 [2024-12-12 10:09:07.089961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfcfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.089980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.618 #22 NEW cov: 12395 ft: 13768 corp: 6/142b lim: 45 exec/s: 0 rss: 73Mb L: 28/32 MS: 1 ChangeBit- 00:06:53.618 [2024-12-12 10:09:07.149947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.149971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.618 
[2024-12-12 10:09:07.150023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.150037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.618 [2024-12-12 10:09:07.150090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfcfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.150103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.618 #23 NEW cov: 12395 ft: 13865 corp: 7/171b lim: 45 exec/s: 0 rss: 73Mb L: 29/32 MS: 1 InsertByte- 00:06:53.618 [2024-12-12 10:09:07.210147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.210171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.618 [2024-12-12 10:09:07.210224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.210238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.618 [2024-12-12 10:09:07.210289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.210302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.618 #24 NEW cov: 12395 ft: 13972 corp: 8/203b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:06:53.618 [2024-12-12 10:09:07.250090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49492749 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.250114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.618 [2024-12-12 10:09:07.250167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.618 [2024-12-12 10:09:07.250180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.877 #25 NEW cov: 12395 ft: 14027 corp: 9/228b lim: 45 exec/s: 0 rss: 73Mb L: 25/32 MS: 1 ChangeByte- 00:06:53.877 [2024-12-12 10:09:07.310402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.310426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.310497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.310510] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.310562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.310578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.877 #26 NEW cov: 12395 ft: 14059 corp: 10/260b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:06:53.877 [2024-12-12 10:09:07.370721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.370745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.370813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:7c490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.370827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.370877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.370890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.370940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.370953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.877 #27 NEW cov: 12395 ft: 14444 corp: 11/301b lim: 45 exec/s: 0 rss: 74Mb L: 41/41 MS: 1 CopyPart- 00:06:53.877 [2024-12-12 10:09:07.430903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.430927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.430980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.430993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.431043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.431056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.431106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:497c4949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.431119] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:53.877 #28 NEW cov: 12395 ft: 14476 corp: 12/342b lim: 45 exec/s: 0 rss: 74Mb L: 41/41 MS: 1 CopyPart- 00:06:53.877 [2024-12-12 10:09:07.470878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.470904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.470957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.470970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:53.877 [2024-12-12 10:09:07.471021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfcfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:53.877 [2024-12-12 10:09:07.471053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:53.877 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:53.877 #29 NEW cov: 12418 ft: 14520 corp: 13/372b lim: 45 exec/s: 0 rss: 74Mb L: 30/41 MS: 1 InsertByte- 00:06:54.136 [2024-12-12 10:09:07.531024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.136 [2024-12-12 10:09:07.531049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.136 [2024-12-12 10:09:07.531103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.531116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.137 [2024-12-12 10:09:07.531166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.531179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.137 #30 NEW cov: 12418 ft: 14559 corp: 14/404b lim: 45 exec/s: 0 rss: 74Mb L: 32/41 MS: 1 CopyPart- 00:06:54.137 [2024-12-12 10:09:07.571128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.571154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.137 [2024-12-12 10:09:07.571223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.571236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.137 [2024-12-12 
10:09:07.571289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:492d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.571302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.137 #31 NEW cov: 12418 ft: 14659 corp: 15/437b lim: 45 exec/s: 31 rss: 74Mb L: 33/41 MS: 1 InsertByte- 00:06:54.137 [2024-12-12 10:09:07.631166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.631191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.137 [2024-12-12 10:09:07.631244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49124949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.631258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.137 #32 NEW cov: 12418 ft: 14678 corp: 16/462b lim: 45 exec/s: 32 rss: 74Mb L: 25/41 MS: 1 ChangeByte- 00:06:54.137 [2024-12-12 10:09:07.671280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.671305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.137 [2024-12-12 10:09:07.671358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.671374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.137 #33 NEW cov: 12418 ft: 14692 corp: 17/480b lim: 45 exec/s: 33 rss: 74Mb L: 18/41 MS: 1 EraseBytes- 00:06:54.137 [2024-12-12 10:09:07.711544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.711569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.137 [2024-12-12 10:09:07.711639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.711652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.137 [2024-12-12 10:09:07.711705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:47000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.711723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.137 #34 NEW cov: 12418 ft: 14768 corp: 18/512b lim: 45 exec/s: 34 rss: 74Mb L: 32/41 MS: 1 CMP- DE: "G\000\000\000\000\000\000\000"- 00:06:54.137 [2024-12-12 10:09:07.751651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 
cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.751674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.137 [2024-12-12 10:09:07.751731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.751745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.137 [2024-12-12 10:09:07.751810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.137 [2024-12-12 10:09:07.751824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.137 #35 NEW cov: 12418 ft: 14796 corp: 19/544b lim: 45 exec/s: 35 rss: 74Mb L: 32/41 MS: 1 ChangeByte- 00:06:54.396 [2024-12-12 10:09:07.791792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.791817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.396 [2024-12-12 10:09:07.791869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.791882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.396 [2024-12-12 10:09:07.791933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.791946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.396 #36 NEW cov: 12418 ft: 14843 corp: 20/576b lim: 45 exec/s: 36 rss: 74Mb L: 32/41 MS: 1 ShuffleBytes- 00:06:54.396 [2024-12-12 10:09:07.851962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.851986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.396 [2024-12-12 10:09:07.852037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.852055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.396 [2024-12-12 10:09:07.852107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfcfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.852121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.396 #37 NEW cov: 12418 ft: 14853 corp: 21/604b lim: 45 exec/s: 37 rss: 74Mb L: 28/41 MS: 1 ShuffleBytes- 00:06:54.396 
[2024-12-12 10:09:07.892243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.892268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.396 [2024-12-12 10:09:07.892321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.892335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.396 [2024-12-12 10:09:07.892384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.892397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.396 [2024-12-12 10:09:07.892447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:497c4949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.892460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.396 #38 NEW cov: 12418 ft: 14876 corp: 22/645b lim: 45 exec/s: 38 rss: 74Mb L: 41/41 MS: 1 ShuffleBytes- 00:06:54.396 [2024-12-12 10:09:07.952225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.952250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.396 [2024-12-12 10:09:07.952318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.396 [2024-12-12 10:09:07.952333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.396 [2024-12-12 10:09:07.952392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfcfdfdf cdw11:df790006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.397 [2024-12-12 10:09:07.952405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.397 #39 NEW cov: 12418 ft: 14892 corp: 23/673b lim: 45 exec/s: 39 rss: 74Mb L: 28/41 MS: 1 ChangeByte- 00:06:54.397 [2024-12-12 10:09:08.012231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.397 [2024-12-12 10:09:08.012254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.397 [2024-12-12 10:09:08.012308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:7c490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.397 [2024-12-12 10:09:08.012322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:54.656 #40 NEW cov: 12418 ft: 14939 corp: 24/697b lim: 45 exec/s: 40 rss: 74Mb L: 24/41 MS: 1 EraseBytes- 00:06:54.656 [2024-12-12 10:09:08.072577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.072601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.072652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.072665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.072720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfcfdfdf cdw11:df790006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.072734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.656 #41 NEW cov: 12418 ft: 14946 corp: 25/725b lim: 45 exec/s: 41 rss: 74Mb L: 28/41 MS: 1 ChangeBinInt- 00:06:54.656 [2024-12-12 10:09:08.132748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.132772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.132824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.132838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.132887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfdfdfdf cdw11:0adf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.132900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.656 #47 NEW cov: 12418 ft: 14973 corp: 26/753b lim: 45 exec/s: 47 rss: 74Mb L: 28/41 MS: 1 CopyPart- 00:06:54.656 [2024-12-12 10:09:08.172876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.172900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.172952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.172965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.173015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:492d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.173028] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.656 #48 NEW cov: 12418 ft: 15040 corp: 27/786b lim: 45 exec/s: 48 rss: 75Mb L: 33/41 MS: 1 ChangeByte- 00:06:54.656 [2024-12-12 10:09:08.233208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.233232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.233284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:4949ff49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.233298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.233353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.233366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.233415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00004700 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.233428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:54.656 #49 NEW cov: 12418 ft: 15052 corp: 28/823b lim: 45 exec/s: 49 rss: 75Mb L: 37/41 MS: 1 InsertRepeatedBytes- 00:06:54.656 [2024-12-12 10:09:08.293231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20200a2b cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.293256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.293308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.293322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.656 [2024-12-12 10:09:08.293373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.656 [2024-12-12 10:09:08.293386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.916 #50 NEW cov: 12418 ft: 15064 corp: 29/851b lim: 45 exec/s: 50 rss: 75Mb L: 28/41 MS: 1 ChangeBinInt- 00:06:54.916 [2024-12-12 10:09:08.333316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.333340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.916 [2024-12-12 10:09:08.333393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 
cid:5 nsid:0 cdw10:dfb6dfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.333407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.916 [2024-12-12 10:09:08.333455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfdfdfdf cdw11:0adf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.333469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.916 #51 NEW cov: 12418 ft: 15080 corp: 30/879b lim: 45 exec/s: 51 rss: 75Mb L: 28/41 MS: 1 ChangeByte- 00:06:54.916 [2024-12-12 10:09:08.393507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.393530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.916 [2024-12-12 10:09:08.393583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.393596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.916 [2024-12-12 10:09:08.393647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfdfdf0a cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.393660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.916 #52 NEW cov: 12418 ft: 15093 corp: 31/907b lim: 45 exec/s: 52 rss: 75Mb L: 28/41 MS: 1 ShuffleBytes- 00:06:54.916 [2024-12-12 10:09:08.433602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.433626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.916 [2024-12-12 10:09:08.433679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.433693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.916 [2024-12-12 10:09:08.433747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfdfdf0a cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.433761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.916 #53 NEW cov: 12418 ft: 15110 corp: 32/935b lim: 45 exec/s: 53 rss: 75Mb L: 28/41 MS: 1 ChangeBinInt- 00:06:54.916 [2024-12-12 10:09:08.493756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:dfdf0adf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.493780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:06:54.916 [2024-12-12 10:09:08.493849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:dfdfdfdf cdw11:dfdf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.493863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.916 [2024-12-12 10:09:08.493915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:dfcfdfdf cdw11:df790006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.493928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:54.916 #54 NEW cov: 12418 ft: 15140 corp: 33/963b lim: 45 exec/s: 54 rss: 75Mb L: 28/41 MS: 1 ShuffleBytes- 00:06:54.916 [2024-12-12 10:09:08.533843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:49490a49 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.533867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:54.916 [2024-12-12 10:09:08.533921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.533934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:54.916 [2024-12-12 10:09:08.533986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49490002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:54.916 [2024-12-12 10:09:08.533999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:55.175 #55 NEW cov: 12418 ft: 15148 corp: 34/995b lim: 45 exec/s: 27 rss: 75Mb L: 32/41 MS: 1 ChangeByte- 00:06:55.175 #55 DONE cov: 12418 ft: 15148 corp: 34/995b lim: 45 exec/s: 27 rss: 75Mb 00:06:55.175 ###### Recommended dictionary. ###### 00:06:55.175 "G\000\000\000\000\000\000\000" # Uses: 0 00:06:55.175 ###### End of recommended dictionary. 
###### 00:06:55.175 Done 55 runs in 2 second(s) 00:06:55.175 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:55.175 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:55.175 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.175 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:55.175 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:55.175 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:55.175 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:55.175 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:55.176 10:09:08 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:55.176 [2024-12-12 10:09:08.726136] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:55.176 [2024-12-12 10:09:08.726225] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471824 ] 00:06:55.435 [2024-12-12 10:09:08.999945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.435 [2024-12-12 10:09:09.054642] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.693 [2024-12-12 10:09:09.114147] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.693 [2024-12-12 10:09:09.130468] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:55.693 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:55.693 INFO: Seed: 4166390880 00:06:55.693 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:06:55.693 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:06:55.693 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:55.693 INFO: A corpus is not provided, starting from an empty corpus 00:06:55.693 #2 INITED exec/s: 0 rss: 65Mb 00:06:55.693 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:55.693 This may also happen if the target rejected all inputs we tried so far 00:06:55.693 [2024-12-12 10:09:09.197580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:55.693 [2024-12-12 10:09:09.197620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.693 [2024-12-12 10:09:09.197704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.693 [2024-12-12 10:09:09.197726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.952 NEW_FUNC[1/715]: 0x446728 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:55.952 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:55.952 #3 NEW cov: 12101 ft: 12097 corp: 2/6b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CMP- DE: "s\000\000\000"- 00:06:55.952 [2024-12-12 10:09:09.537012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:55.952 [2024-12-12 10:09:09.537068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.952 [2024-12-12 10:09:09.537147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.952 [2024-12-12 10:09:09.537173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.952 #4 NEW cov: 12221 ft: 12905 corp: 3/11b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 PersAutoDict- DE: "s\000\000\000"- 00:06:55.952 [2024-12-12 10:09:09.576913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:55.952 [2024-12-12 10:09:09.576938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:55.952 [2024-12-12 10:09:09.576990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.952 [2024-12-12 10:09:09.577003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:55.952 [2024-12-12 10:09:09.577052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:55.952 [2024-12-12 10:09:09.577064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:56.210 #5 NEW cov: 12227 ft: 13477 corp: 4/18b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:06:56.210 [2024-12-12 10:09:09.617020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.210 [2024-12-12 10:09:09.617045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.210 [2024-12-12 10:09:09.617094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.210 [2024-12-12 10:09:09.617107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.210 [2024-12-12 10:09:09.617157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:56.210 [2024-12-12 10:09:09.617170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.210 #6 NEW cov: 12312 ft: 13655 corp: 5/24b lim: 10 exec/s: 0 rss: 73Mb L: 6/7 MS: 1 InsertByte- 00:06:56.210 [2024-12-12 10:09:09.677098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:56.210 [2024-12-12 10:09:09.677122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.210 [2024-12-12 10:09:09.677174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:56.210 [2024-12-12 10:09:09.677187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.210 #7 NEW cov: 12312 ft: 13757 corp: 6/29b lim: 10 exec/s: 0 rss: 73Mb L: 5/7 MS: 1 ChangeBinInt- 00:06:56.210 [2024-12-12 10:09:09.737448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.211 [2024-12-12 10:09:09.737473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.211 [2024-12-12 10:09:09.737526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.211 [2024-12-12 10:09:09.737539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.211 [2024-12-12 10:09:09.737589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:56.211 [2024-12-12 10:09:09.737602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.211 [2024-12-12 10:09:09.737650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:56.211 [2024-12-12 10:09:09.737663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.211 #8 NEW cov: 12312 ft: 14022 corp: 7/38b lim: 10 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 PersAutoDict- DE: "s\000\000\000"- 00:06:56.211 [2024-12-12 10:09:09.797504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.211 [2024-12-12 10:09:09.797528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.211 [2024-12-12 10:09:09.797580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.211 [2024-12-12 10:09:09.797593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.211 [2024-12-12 10:09:09.797644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.211 [2024-12-12 10:09:09.797673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.211 #9 NEW cov: 12312 ft: 14061 corp: 8/45b lim: 10 exec/s: 0 rss: 73Mb L: 7/9 MS: 1 PersAutoDict- DE: "s\000\000\000"- 00:06:56.469 [2024-12-12 10:09:09.857729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.469 [2024-12-12 10:09:09.857753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.469 [2024-12-12 10:09:09.857805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.469 [2024-12-12 10:09:09.857818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.469 [2024-12-12 10:09:09.857867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.469 [2024-12-12 10:09:09.857879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.469 #10 NEW cov: 12312 ft: 14087 corp: 9/51b lim: 10 exec/s: 0 rss: 73Mb L: 6/9 MS: 1 CopyPart- 00:06:56.469 [2024-12-12 10:09:09.917875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.469 [2024-12-12 10:09:09.917899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.469 [2024-12-12 10:09:09.917948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.469 [2024-12-12 10:09:09.917961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.469 [2024-12-12 10:09:09.918010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.469 [2024-12-12 10:09:09.918023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.469 #11 NEW cov: 12312 ft: 14128 corp: 10/57b lim: 10 exec/s: 0 rss: 73Mb L: 6/9 MS: 1 CrossOver- 00:06:56.469 [2024-12-12 10:09:09.977943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005300 cdw11:00000000 00:06:56.469 [2024-12-12 10:09:09.977967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:56.469 [2024-12-12 10:09:09.978018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.470 [2024-12-12 10:09:09.978031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.470 #12 NEW cov: 12312 ft: 14247 corp: 11/62b lim: 10 exec/s: 0 rss: 73Mb L: 5/9 MS: 1 ChangeBit- 00:06:56.470 [2024-12-12 10:09:10.018175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000db00 cdw11:00000000 00:06:56.470 [2024-12-12 10:09:10.018200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.470 [2024-12-12 10:09:10.018253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.470 [2024-12-12 10:09:10.018266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.470 [2024-12-12 10:09:10.018318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.470 [2024-12-12 10:09:10.018332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.470 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:56.470 #13 NEW cov: 12335 ft: 14318 corp: 12/68b lim: 10 exec/s: 0 rss: 74Mb L: 6/9 MS: 1 ChangeByte- 00:06:56.470 [2024-12-12 10:09:10.078564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.470 [2024-12-12 10:09:10.078589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.470 [2024-12-12 10:09:10.078642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.470 [2024-12-12 10:09:10.078656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.470 [2024-12-12 10:09:10.078708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.470 [2024-12-12 10:09:10.078726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.470 [2024-12-12 10:09:10.078775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.470 [2024-12-12 10:09:10.078787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.470 [2024-12-12 10:09:10.078838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:56.470 [2024-12-12 10:09:10.078851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.470 #14 NEW cov: 12335 ft: 14402 corp: 13/78b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:56.728 [2024-12-12 10:09:10.118313] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.118337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.118404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff01 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.118418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.729 #15 NEW cov: 12335 ft: 14423 corp: 14/83b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 ChangeBinInt- 00:06:56.729 [2024-12-12 10:09:10.158574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005300 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.158598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.158651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.158664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.158719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.158733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.729 #16 NEW cov: 12335 ft: 14459 corp: 15/89b lim: 10 exec/s: 16 rss: 74Mb L: 6/10 MS: 1 ChangeBit- 00:06:56.729 [2024-12-12 10:09:10.218618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.218642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.218694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff01 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.218708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.729 #17 NEW cov: 12335 ft: 14478 corp: 16/94b lim: 10 exec/s: 17 rss: 74Mb L: 5/10 MS: 1 ChangeByte- 00:06:56.729 [2024-12-12 10:09:10.279135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.279159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.279227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.279240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.279292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.279304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.279355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.279368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.279418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000909 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.279432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:56.729 #18 NEW cov: 12335 ft: 14497 corp: 17/104b lim: 10 exec/s: 18 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:06:56.729 [2024-12-12 10:09:10.319014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004300 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.319038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.319088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.319101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.729 [2024-12-12 10:09:10.319157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.729 [2024-12-12 10:09:10.319170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.729 #19 NEW cov: 12335 ft: 14507 corp: 18/110b lim: 10 exec/s: 19 rss: 74Mb L: 6/10 MS: 1 ChangeBit- 00:06:56.988 [2024-12-12 10:09:10.379053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.379077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.379147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000001ff cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.379161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.988 #20 NEW cov: 12335 ft: 14540 corp: 19/115b lim: 10 exec/s: 20 rss: 74Mb L: 5/10 MS: 1 ShuffleBytes- 00:06:56.988 [2024-12-12 10:09:10.419325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007304 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.419349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.419402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000f601 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.419416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.419467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fff5 cdw11:00000000 
00:06:56.988 [2024-12-12 10:09:10.419479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.988 #21 NEW cov: 12335 ft: 14547 corp: 20/121b lim: 10 exec/s: 21 rss: 74Mb L: 6/10 MS: 1 InsertByte- 00:06:56.988 [2024-12-12 10:09:10.479378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.479402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.479454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000001ff cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.479467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.988 #22 NEW cov: 12335 ft: 14557 corp: 21/126b lim: 10 exec/s: 22 rss: 74Mb L: 5/10 MS: 1 ChangeByte- 00:06:56.988 [2024-12-12 10:09:10.519662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.519688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.519744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff73 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.519758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.519809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f6ff cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.519822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.519871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff09 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.519884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.988 #23 NEW cov: 12335 ft: 14602 corp: 22/135b lim: 10 exec/s: 23 rss: 74Mb L: 9/10 MS: 1 EraseBytes- 00:06:56.988 [2024-12-12 10:09:10.579689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.579719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.579772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.579785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.988 #24 NEW cov: 12335 ft: 14609 corp: 23/140b lim: 10 exec/s: 24 rss: 74Mb L: 5/10 MS: 1 PersAutoDict- DE: "s\000\000\000"- 00:06:56.988 [2024-12-12 10:09:10.620110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.620135] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.620188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00007300 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.620201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.620251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.620264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.620313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.620326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:56.988 [2024-12-12 10:09:10.620379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:06:56.988 [2024-12-12 10:09:10.620392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.247 #25 NEW cov: 12335 ft: 14626 corp: 24/150b lim: 10 exec/s: 25 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:06:57.247 [2024-12-12 10:09:10.680174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000273 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.680198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.680249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000073 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.680262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.680313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.680342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.680394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.680407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.247 #28 NEW cov: 12335 ft: 14630 corp: 25/159b lim: 10 exec/s: 28 rss: 74Mb L: 9/10 MS: 3 ChangeBit-ShuffleBytes-CrossOver- 00:06:57.247 [2024-12-12 10:09:10.720311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073f6 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.720339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.720392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.720405] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.720457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000073ad cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.720469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.720521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.720534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.720582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000909 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.720595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.247 #29 NEW cov: 12335 ft: 14672 corp: 26/169b lim: 10 exec/s: 29 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:06:57.247 [2024-12-12 10:09:10.760418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.760443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.760496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.760510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.760558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007300 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.760572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.760622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.760635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.760685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a46 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.760698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.247 #30 NEW cov: 12335 ft: 14688 corp: 27/179b lim: 10 exec/s: 30 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:06:57.247 [2024-12-12 10:09:10.820414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.820439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.820506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.820519] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.247 [2024-12-12 10:09:10.820570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.247 [2024-12-12 10:09:10.820583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.248 #31 NEW cov: 12335 ft: 14735 corp: 28/186b lim: 10 exec/s: 31 rss: 75Mb L: 7/10 MS: 1 CrossOver- 00:06:57.248 [2024-12-12 10:09:10.860547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000dbdb cdw11:00000000 00:06:57.248 [2024-12-12 10:09:10.860571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.248 [2024-12-12 10:09:10.860624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000073 cdw11:00000000 00:06:57.248 [2024-12-12 10:09:10.860637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.248 [2024-12-12 10:09:10.860690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.248 [2024-12-12 10:09:10.860703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.506 #32 NEW cov: 12335 ft: 14743 corp: 29/192b lim: 10 exec/s: 32 rss: 75Mb L: 6/10 MS: 1 CopyPart- 00:06:57.506 [2024-12-12 10:09:10.920478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:57.506 [2024-12-12 10:09:10.920502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.506 #33 NEW cov: 12335 ft: 14936 corp: 30/194b lim: 10 exec/s: 33 rss: 75Mb L: 2/10 MS: 1 CrossOver- 00:06:57.506 [2024-12-12 10:09:10.960592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073ff cdw11:00000000 00:06:57.506 [2024-12-12 10:09:10.960616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.506 #34 NEW cov: 12335 ft: 14958 corp: 31/197b lim: 10 exec/s: 34 rss: 75Mb L: 3/10 MS: 1 EraseBytes- 00:06:57.506 [2024-12-12 10:09:11.021012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007300 cdw11:00000000 00:06:57.506 [2024-12-12 10:09:11.021037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.506 [2024-12-12 10:09:11.021089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.021102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.507 [2024-12-12 10:09:11.021153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.021166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:57.507 #35 NEW cov: 12335 ft: 14978 corp: 32/203b lim: 10 exec/s: 35 rss: 75Mb L: 6/10 MS: 1 CrossOver- 00:06:57.507 [2024-12-12 10:09:11.061226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000073d2 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.061249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.507 [2024-12-12 10:09:11.061300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d2d2 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.061314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.507 [2024-12-12 10:09:11.061364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000d200 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.061377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.507 [2024-12-12 10:09:11.061426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.061438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.507 #36 NEW cov: 12335 ft: 15010 corp: 33/212b lim: 10 exec/s: 36 rss: 75Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:57.507 [2024-12-12 10:09:11.101099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007376 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.101124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.507 [2024-12-12 10:09:11.101175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff01 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.101188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.507 #37 NEW cov: 12335 ft: 15014 corp: 34/217b lim: 10 exec/s: 37 rss: 75Mb L: 5/10 MS: 1 ChangeBit- 00:06:57.507 [2024-12-12 10:09:11.141583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.141608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.507 [2024-12-12 10:09:11.141675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000032 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.141689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.507 [2024-12-12 10:09:11.141753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007300 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.141766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.507 [2024-12-12 10:09:11.141831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.141844] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:57.507 [2024-12-12 10:09:11.141896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:57.507 [2024-12-12 10:09:11.141909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:57.766 #38 NEW cov: 12335 ft: 15030 corp: 35/227b lim: 10 exec/s: 38 rss: 75Mb L: 10/10 MS: 1 ChangeByte- 00:06:57.766 [2024-12-12 10:09:11.181378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000db00 cdw11:00000000 00:06:57.766 [2024-12-12 10:09:11.181402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:57.766 [2024-12-12 10:09:11.181454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000db73 cdw11:00000000 00:06:57.766 [2024-12-12 10:09:11.181467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:57.766 [2024-12-12 10:09:11.181519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:57.766 [2024-12-12 10:09:11.181532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:57.766 #39 NEW cov: 12335 ft: 15045 corp: 36/233b lim: 10 exec/s: 19 rss: 75Mb L: 6/10 MS: 1 ShuffleBytes- 00:06:57.766 #39 DONE cov: 12335 ft: 15045 corp: 36/233b lim: 10 exec/s: 19 rss: 75Mb 00:06:57.766 ###### Recommended dictionary. ###### 00:06:57.766 "s\000\000\000" # Uses: 4 00:06:57.766 ###### End of recommended dictionary. 
###### 00:06:57.766 Done 39 runs in 2 second(s) 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:57.767 10:09:11 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:57.767 [2024-12-12 10:09:11.377078] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:06:57.767 [2024-12-12 10:09:11.377152] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472357 ] 00:06:58.025 [2024-12-12 10:09:11.651914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.284 [2024-12-12 10:09:11.706586] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.284 [2024-12-12 10:09:11.765523] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.284 [2024-12-12 10:09:11.781842] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:58.284 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:58.284 INFO: Seed: 2522425689 00:06:58.284 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:06:58.284 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:06:58.284 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:58.284 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.284 #2 INITED exec/s: 0 rss: 65Mb 00:06:58.284 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:58.284 This may also happen if the target rejected all inputs we tried so far 00:06:58.284 [2024-12-12 10:09:11.836617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a62 cdw11:00000000 00:06:58.284 [2024-12-12 10:09:11.836650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.852 NEW_FUNC[1/715]: 0x447128 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:58.852 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:58.852 #11 NEW cov: 12108 ft: 12102 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 4 ShuffleBytes-ShuffleBytes-ChangeByte-CrossOver- 00:06:58.852 [2024-12-12 10:09:12.207588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003d3d cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.207626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.852 #16 NEW cov: 12221 ft: 12516 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 5 ShuffleBytes-ChangeByte-ChangeBit-ChangeBit-CopyPart- 00:06:58.852 [2024-12-12 10:09:12.257550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.257580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.852 #17 NEW cov: 12227 ft: 12932 corp: 4/7b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:06:58.852 [2024-12-12 10:09:12.307803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.307831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.852 [2024-12-12 10:09:12.307877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.307892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.852 [2024-12-12 10:09:12.307919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.307934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.852 #18 NEW cov: 12312 ft: 13492 corp: 5/14b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:06:58.852 [2024-12-12 10:09:12.398035] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.398065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.852 [2024-12-12 10:09:12.398110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.398126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:58.852 [2024-12-12 10:09:12.398152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.398168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:58.852 #19 NEW cov: 12312 ft: 13611 corp: 6/21b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 ChangeByte- 00:06:58.852 [2024-12-12 10:09:12.488282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.488314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:58.852 [2024-12-12 10:09:12.488347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e77b cdw11:00000000 00:06:58.852 [2024-12-12 10:09:12.488364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.110 #20 NEW cov: 12312 ft: 13839 corp: 7/25b lim: 10 exec/s: 0 rss: 73Mb L: 4/7 MS: 1 EraseBytes- 00:06:59.110 [2024-12-12 10:09:12.578582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:59.110 [2024-12-12 10:09:12.578610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.110 [2024-12-12 10:09:12.578656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000102 cdw11:00000000 00:06:59.110 [2024-12-12 10:09:12.578679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.110 [2024-12-12 10:09:12.578706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003cfa cdw11:00000000 00:06:59.110 [2024-12-12 10:09:12.578728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.111 [2024-12-12 10:09:12.578756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000f35 cdw11:00000000 00:06:59.111 [2024-12-12 10:09:12.578770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.111 [2024-12-12 10:09:12.578796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000edf0 cdw11:00000000 00:06:59.111 [2024-12-12 10:09:12.578811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.111 #21 NEW cov: 12312 ft: 14176 corp: 8/35b lim: 10 
exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CMP- DE: "\001\002<\372\0175\355\360"- 00:06:59.111 [2024-12-12 10:09:12.668648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003d0a cdw11:00000000 00:06:59.111 [2024-12-12 10:09:12.668677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.111 #23 NEW cov: 12312 ft: 14274 corp: 9/37b lim: 10 exec/s: 0 rss: 73Mb L: 2/10 MS: 2 ShuffleBytes-CrossOver- 00:06:59.111 [2024-12-12 10:09:12.718969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:59.111 [2024-12-12 10:09:12.718998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.111 [2024-12-12 10:09:12.719043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008b7b cdw11:00000000 00:06:59.111 [2024-12-12 10:09:12.719058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.111 [2024-12-12 10:09:12.719084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006023 cdw11:00000000 00:06:59.111 [2024-12-12 10:09:12.719099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.111 [2024-12-12 10:09:12.719125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000fa3c cdw11:00000000 00:06:59.111 [2024-12-12 10:09:12.719140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.111 [2024-12-12 10:09:12.719166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000200 cdw11:00000000 00:06:59.111 [2024-12-12 10:09:12.719181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.369 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:06:59.369 #24 NEW cov: 12329 ft: 14344 corp: 10/47b lim: 10 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 CMP- DE: "\213{`#\372<\002\000"- 00:06:59.369 [2024-12-12 10:09:12.819771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:59.369 [2024-12-12 10:09:12.819798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.369 [2024-12-12 10:09:12.819862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003d7b cdw11:00000000 00:06:59.369 [2024-12-12 10:09:12.819876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.369 #25 NEW cov: 12329 ft: 14443 corp: 11/51b lim: 10 exec/s: 25 rss: 73Mb L: 4/10 MS: 1 ShuffleBytes- 00:06:59.369 [2024-12-12 10:09:12.880184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:59.369 [2024-12-12 10:09:12.880209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:59.369 [2024-12-12 10:09:12.880279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.369 [2024-12-12 10:09:12.880293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.369 [2024-12-12 10:09:12.880345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.369 [2024-12-12 10:09:12.880358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.369 [2024-12-12 10:09:12.880412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffe7 cdw11:00000000 00:06:59.369 [2024-12-12 10:09:12.880425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.369 #26 NEW cov: 12329 ft: 14513 corp: 12/60b lim: 10 exec/s: 26 rss: 73Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:06:59.369 [2024-12-12 10:09:12.920313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:59.369 [2024-12-12 10:09:12.920339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.370 [2024-12-12 10:09:12.920394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff7f cdw11:00000000 00:06:59.370 [2024-12-12 10:09:12.920408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.370 [2024-12-12 10:09:12.920460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.370 [2024-12-12 10:09:12.920473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.370 [2024-12-12 10:09:12.920526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffe7 cdw11:00000000 00:06:59.370 [2024-12-12 10:09:12.920539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.370 #27 NEW cov: 12329 ft: 14523 corp: 13/69b lim: 10 exec/s: 27 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:06:59.370 [2024-12-12 10:09:12.980142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:59.370 [2024-12-12 10:09:12.980166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.628 #28 NEW cov: 12329 ft: 14550 corp: 14/71b lim: 10 exec/s: 28 rss: 73Mb L: 2/10 MS: 1 EraseBytes- 00:06:59.628 [2024-12-12 10:09:13.040620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:59.628 [2024-12-12 10:09:13.040644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.628 [2024-12-12 10:09:13.040719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffdf cdw11:00000000 00:06:59.629 [2024-12-12 
10:09:13.040734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.629 [2024-12-12 10:09:13.040796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.040809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.629 [2024-12-12 10:09:13.040858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffe7 cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.040874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.629 #29 NEW cov: 12329 ft: 14572 corp: 15/80b lim: 10 exec/s: 29 rss: 73Mb L: 9/10 MS: 1 ChangeByte- 00:06:59.629 [2024-12-12 10:09:13.100912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.100937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.629 [2024-12-12 10:09:13.101006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.101020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.629 [2024-12-12 10:09:13.101072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.101086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.629 [2024-12-12 10:09:13.101138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.101151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.629 [2024-12-12 10:09:13.101203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00003d3d cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.101216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.629 #30 NEW cov: 12329 ft: 14596 corp: 16/90b lim: 10 exec/s: 30 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:59.629 [2024-12-12 10:09:13.140555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fe0a cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.140579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.629 #31 NEW cov: 12329 ft: 14635 corp: 17/92b lim: 10 exec/s: 31 rss: 73Mb L: 2/10 MS: 1 ChangeBinInt- 00:06:59.629 [2024-12-12 10:09:13.180639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.180664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.629 #32 NEW cov: 12329 ft: 14706 corp: 18/94b lim: 10 exec/s: 32 
rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:06:59.629 [2024-12-12 10:09:13.220873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.220897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.629 [2024-12-12 10:09:13.220966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.220980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.629 #33 NEW cov: 12329 ft: 14763 corp: 19/99b lim: 10 exec/s: 33 rss: 73Mb L: 5/10 MS: 1 EraseBytes- 00:06:59.629 [2024-12-12 10:09:13.261095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.261119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.629 [2024-12-12 10:09:13.261190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.261203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.629 [2024-12-12 10:09:13.261260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:59.629 [2024-12-12 10:09:13.261273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.888 #34 NEW cov: 12329 ft: 14769 corp: 20/106b lim: 10 exec/s: 34 rss: 73Mb L: 7/10 MS: 1 ChangeByte- 00:06:59.888 [2024-12-12 10:09:13.301324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.301347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.888 [2024-12-12 10:09:13.301416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000bfff cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.301429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.888 [2024-12-12 10:09:13.301482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.301495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.888 [2024-12-12 10:09:13.301549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffe7 cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.301562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.888 #35 NEW cov: 12329 ft: 14817 corp: 21/115b lim: 10 exec/s: 35 rss: 73Mb L: 9/10 MS: 1 ChangeBit- 00:06:59.888 [2024-12-12 10:09:13.341090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e7e7 cdw11:00000000 00:06:59.888 
[2024-12-12 10:09:13.341114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.888 #36 NEW cov: 12329 ft: 14838 corp: 22/118b lim: 10 exec/s: 36 rss: 73Mb L: 3/10 MS: 1 CopyPart- 00:06:59.888 [2024-12-12 10:09:13.401383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000fe0a cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.401407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.888 [2024-12-12 10:09:13.401474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a62 cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.401488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.888 #37 NEW cov: 12329 ft: 14865 corp: 23/122b lim: 10 exec/s: 37 rss: 73Mb L: 4/10 MS: 1 CrossOver- 00:06:59.888 [2024-12-12 10:09:13.461911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.461935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.888 [2024-12-12 10:09:13.461990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffdf cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.462003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:59.888 [2024-12-12 10:09:13.462056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.462069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:59.888 [2024-12-12 10:09:13.462122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000040ff cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.462135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:59.888 [2024-12-12 10:09:13.462189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000e77b cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.462202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:59.888 #38 NEW cov: 12329 ft: 14870 corp: 24/132b lim: 10 exec/s: 38 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:06:59.888 [2024-12-12 10:09:13.521753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.521778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:59.888 [2024-12-12 10:09:13.521833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e77a cdw11:00000000 00:06:59.888 [2024-12-12 10:09:13.521847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.147 #39 NEW cov: 12329 ft: 14877 corp: 25/136b lim: 10 exec/s: 39 
rss: 73Mb L: 4/10 MS: 1 ChangeBit- 00:07:00.147 [2024-12-12 10:09:13.561722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a08 cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.561747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.147 #40 NEW cov: 12329 ft: 14881 corp: 26/138b lim: 10 exec/s: 40 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:00.147 [2024-12-12 10:09:13.622056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.622081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.147 [2024-12-12 10:09:13.622151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003d08 cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.622165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.147 #41 NEW cov: 12329 ft: 14906 corp: 27/142b lim: 10 exec/s: 41 rss: 74Mb L: 4/10 MS: 1 CrossOver- 00:07:00.147 [2024-12-12 10:09:13.662360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.662385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.147 [2024-12-12 10:09:13.662441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000023c cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.662455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.147 [2024-12-12 10:09:13.662523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fa0f cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.662537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.147 [2024-12-12 10:09:13.662589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000035ed cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.662602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.147 #42 NEW cov: 12329 ft: 14922 corp: 28/151b lim: 10 exec/s: 42 rss: 74Mb L: 9/10 MS: 1 PersAutoDict- DE: "\001\002<\372\0175\355\360"- 00:07:00.147 [2024-12-12 10:09:13.702580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.702604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.147 [2024-12-12 10:09:13.702657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000bfff cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.702674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.147 [2024-12-12 10:09:13.702731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 
cdw10:0000ffff cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.702744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.147 [2024-12-12 10:09:13.702795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff7b cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.702808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.147 [2024-12-12 10:09:13.702858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000e77b cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.702871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:00.147 #43 NEW cov: 12336 ft: 14936 corp: 29/161b lim: 10 exec/s: 43 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:07:00.147 [2024-12-12 10:09:13.762252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008a31 cdw11:00000000 00:07:00.147 [2024-12-12 10:09:13.762276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.147 #46 NEW cov: 12336 ft: 14943 corp: 30/163b lim: 10 exec/s: 46 rss: 74Mb L: 2/10 MS: 3 ChangeBit-CopyPart-InsertByte- 00:07:00.407 [2024-12-12 10:09:13.802601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003de7 cdw11:00000000 00:07:00.407 [2024-12-12 10:09:13.802625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.407 [2024-12-12 10:09:13.802680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e700 cdw11:00000000 00:07:00.407 [2024-12-12 10:09:13.802694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.407 [2024-12-12 10:09:13.802765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:00.407 [2024-12-12 10:09:13.802778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.407 #47 NEW cov: 12336 ft: 14954 corp: 31/170b lim: 10 exec/s: 23 rss: 74Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:00.407 #47 DONE cov: 12336 ft: 14954 corp: 31/170b lim: 10 exec/s: 23 rss: 74Mb 00:07:00.407 ###### Recommended dictionary. ###### 00:07:00.407 "\001\002<\372\0175\355\360" # Uses: 1 00:07:00.407 "\213{`#\372<\002\000" # Uses: 0 00:07:00.407 ###### End of recommended dictionary. 
###### 00:07:00.407 Done 47 runs in 2 second(s) 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:00.407 10:09:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:00.407 [2024-12-12 10:09:13.997169] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:00.407 [2024-12-12 10:09:13.997243] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472825 ] 00:07:00.666 [2024-12-12 10:09:14.272719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.925 [2024-12-12 10:09:14.330499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.925 [2024-12-12 10:09:14.389796] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.925 [2024-12-12 10:09:14.406136] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:00.925 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:00.925 INFO: Seed: 850458021 00:07:00.925 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:00.925 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:00.925 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:00.925 INFO: A corpus is not provided, starting from an empty corpus 00:07:00.925 [2024-12-12 10:09:14.465391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.925 [2024-12-12 10:09:14.465420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.925 #2 INITED cov: 12106 ft: 12103 corp: 1/1b exec/s: 0 rss: 71Mb 00:07:00.925 [2024-12-12 10:09:14.506051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.925 [2024-12-12 10:09:14.506078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:00.925 [2024-12-12 10:09:14.506138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.925 [2024-12-12 10:09:14.506152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:00.925 [2024-12-12 10:09:14.506208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.925 [2024-12-12 10:09:14.506221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:00.925 [2024-12-12 10:09:14.506280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.925 [2024-12-12 10:09:14.506293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:00.925 [2024-12-12 10:09:14.506352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:00.925 [2024-12-12 10:09:14.506366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.492 NEW_FUNC[1/3]: 0x17d0c28 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3959 00:07:01.492 NEW_FUNC[2/3]: 0x19acf58 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1615 00:07:01.492 #3 NEW cov: 12248 ft: 13511 corp: 2/6b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:01.492 [2024-12-12 10:09:14.846426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.492 [2024-12-12 10:09:14.846479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.492 #4 NEW cov: 12254 ft: 
13899 corp: 3/7b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:01.492 [2024-12-12 10:09:14.896306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.492 [2024-12-12 10:09:14.896333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.492 #5 NEW cov: 12339 ft: 14191 corp: 4/8b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:07:01.492 [2024-12-12 10:09:14.936456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.492 [2024-12-12 10:09:14.936480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.492 #6 NEW cov: 12339 ft: 14359 corp: 5/9b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:01.492 [2024-12-12 10:09:14.996600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.492 [2024-12-12 10:09:14.996624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.492 #7 NEW cov: 12339 ft: 14419 corp: 6/10b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:07:01.492 [2024-12-12 10:09:15.056789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.492 [2024-12-12 10:09:15.056813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.492 #8 NEW cov: 12339 ft: 14467 corp: 7/11b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:07:01.492 [2024-12-12 10:09:15.096868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.492 [2024-12-12 10:09:15.096892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.750 #9 NEW cov: 12339 ft: 14475 corp: 8/12b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:01.750 [2024-12-12 10:09:15.157677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.750 [2024-12-12 10:09:15.157703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.750 [2024-12-12 10:09:15.157774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.750 [2024-12-12 10:09:15.157788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.750 [2024-12-12 10:09:15.157847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.750 [2024-12-12 10:09:15.157860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.750 [2024-12-12 10:09:15.157916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.750 [2024-12-12 10:09:15.157929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.750 [2024-12-12 10:09:15.157984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.750 [2024-12-12 10:09:15.157998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.750 #10 NEW cov: 12339 ft: 14529 corp: 9/17b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:07:01.750 [2024-12-12 10:09:15.217237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.751 [2024-12-12 10:09:15.217262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.751 #11 NEW cov: 12339 ft: 14582 corp: 10/18b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:07:01.751 [2024-12-12 10:09:15.257942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.751 [2024-12-12 10:09:15.257966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.751 [2024-12-12 10:09:15.258022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.751 [2024-12-12 10:09:15.258035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.751 [2024-12-12 10:09:15.258091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.751 [2024-12-12 10:09:15.258104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:01.751 [2024-12-12 10:09:15.258157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.751 [2024-12-12 10:09:15.258170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:01.751 [2024-12-12 10:09:15.258224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.751 [2024-12-12 10:09:15.258237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:01.751 #12 NEW cov: 12339 ft: 14650 corp: 11/23b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:07:01.751 [2024-12-12 10:09:15.297430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.751 [2024-12-12 10:09:15.297454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.751 #13 NEW cov: 12339 ft: 14674 corp: 12/24b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:07:01.751 [2024-12-12 10:09:15.337685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.751 [2024-12-12 10:09:15.337713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:01.751 [2024-12-12 10:09:15.337774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:01.751 [2024-12-12 10:09:15.337787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:01.751 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:01.751 #14 NEW cov: 12362 ft: 14899 corp: 13/26b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:07:02.010 [2024-12-12 10:09:15.397918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.397943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.010 [2024-12-12 10:09:15.397998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.398011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.010 #15 NEW cov: 12362 ft: 14926 corp: 14/28b lim: 5 exec/s: 0 rss: 74Mb L: 2/5 MS: 1 CrossOver- 00:07:02.010 [2024-12-12 10:09:15.458078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.458102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.010 [2024-12-12 10:09:15.458156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.458170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.010 #16 NEW cov: 12362 ft: 14984 corp: 15/30b lim: 5 exec/s: 16 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:07:02.010 [2024-12-12 10:09:15.498344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.498369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.010 [2024-12-12 10:09:15.498425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.498439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.010 [2024-12-12 10:09:15.498492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.498506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.010 #17 NEW cov: 12362 ft: 15145 corp: 16/33b lim: 5 exec/s: 17 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:07:02.010 [2024-12-12 10:09:15.558233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.558258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.010 #18 NEW cov: 12362 ft: 15154 corp: 17/34b lim: 5 exec/s: 18 rss: 74Mb L: 1/5 MS: 1 CopyPart- 00:07:02.010 [2024-12-12 10:09:15.598880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.598908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.010 [2024-12-12 10:09:15.598981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.598995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.010 [2024-12-12 10:09:15.599050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.599064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.010 [2024-12-12 10:09:15.599120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.599134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.010 [2024-12-12 10:09:15.599190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.599204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.010 #19 NEW cov: 12362 ft: 15172 corp: 18/39b lim: 5 exec/s: 19 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:02.010 [2024-12-12 10:09:15.638425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.010 [2024-12-12 10:09:15.638449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.269 #20 NEW cov: 12362 ft: 15189 corp: 19/40b lim: 5 exec/s: 20 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:02.269 [2024-12-12 10:09:15.679120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.679145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.679215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.679229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.679285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.679298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.679352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.679365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.679421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.679435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.269 #21 NEW cov: 12362 ft: 15198 corp: 20/45b lim: 5 exec/s: 21 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:07:02.269 [2024-12-12 10:09:15.738693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.738721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.269 #22 NEW cov: 12362 ft: 15223 corp: 21/46b lim: 5 exec/s: 22 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:07:02.269 [2024-12-12 10:09:15.779091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.779117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.779170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.779184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.779237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 
nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.779251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.269 #23 NEW cov: 12362 ft: 15234 corp: 22/49b lim: 5 exec/s: 23 rss: 74Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:02.269 [2024-12-12 10:09:15.839605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.839630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.839688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.839702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.839756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.839769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.839823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.839838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.839892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.839905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.269 #24 NEW cov: 12362 ft: 15256 corp: 23/54b lim: 5 exec/s: 24 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:07:02.269 [2024-12-12 10:09:15.879234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.879258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.269 [2024-12-12 10:09:15.879329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.269 [2024-12-12 10:09:15.879343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.269 #25 NEW cov: 12362 ft: 15264 corp: 24/56b lim: 5 exec/s: 25 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:07:02.528 [2024-12-12 10:09:15.919192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:15.919217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.528 #26 NEW cov: 12362 ft: 15288 corp: 25/57b lim: 5 exec/s: 26 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:02.528 [2024-12-12 10:09:15.959316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:15.959340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.528 #27 NEW cov: 12362 ft: 15295 corp: 26/58b lim: 5 exec/s: 27 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:02.528 [2024-12-12 10:09:16.020088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.020112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.528 [2024-12-12 10:09:16.020168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.020181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.528 [2024-12-12 10:09:16.020239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.020252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.528 [2024-12-12 10:09:16.020309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.020322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.528 [2024-12-12 10:09:16.020375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.020388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.528 #28 NEW cov: 12362 ft: 15323 corp: 27/63b lim: 5 exec/s: 28 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:07:02.528 [2024-12-12 10:09:16.060157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.060182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.528 [2024-12-12 10:09:16.060238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.060251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.528 [2024-12-12 10:09:16.060305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.060319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.528 [2024-12-12 10:09:16.060372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.060389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.528 [2024-12-12 10:09:16.060444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.060456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.528 #29 NEW cov: 12362 ft: 15327 corp: 28/68b lim: 5 exec/s: 29 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:02.528 [2024-12-12 10:09:16.119780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.528 [2024-12-12 10:09:16.119805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.528 #30 NEW cov: 12362 ft: 15345 corp: 29/69b lim: 5 exec/s: 30 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:02.529 [2024-12-12 10:09:16.160306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.529 [2024-12-12 10:09:16.160329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.529 [2024-12-12 10:09:16.160384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.529 [2024-12-12 10:09:16.160398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.529 [2024-12-12 10:09:16.160450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.529 [2024-12-12 10:09:16.160463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.529 [2024-12-12 10:09:16.160514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.529 [2024-12-12 10:09:16.160527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.787 #31 NEW cov: 12362 ft: 15392 corp: 30/73b lim: 5 exec/s: 31 rss: 74Mb L: 4/5 MS: 1 CopyPart- 00:07:02.787 [2024-12-12 10:09:16.200145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.200169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:07:02.787 [2024-12-12 10:09:16.200224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.200237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.787 #32 NEW cov: 12362 ft: 15406 corp: 31/75b lim: 5 exec/s: 32 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:07:02.787 [2024-12-12 10:09:16.260759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.260783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.787 [2024-12-12 10:09:16.260838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.260851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.787 [2024-12-12 10:09:16.260922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.260936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.787 [2024-12-12 10:09:16.260989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.261002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.787 [2024-12-12 10:09:16.261055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.261068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:02.787 #33 NEW cov: 12362 ft: 15419 corp: 32/80b lim: 5 exec/s: 33 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:02.787 [2024-12-12 10:09:16.300233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.300257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.787 #34 NEW cov: 12362 ft: 15447 corp: 33/81b lim: 5 exec/s: 34 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:02.787 [2024-12-12 10:09:16.340323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.340347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.787 #35 NEW cov: 12362 ft: 15467 corp: 34/82b lim: 5 exec/s: 35 rss: 74Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:02.787 [2024-12-12 10:09:16.401154] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.401178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:02.787 [2024-12-12 10:09:16.401231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.401244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:02.787 [2024-12-12 10:09:16.401298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.401312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:02.787 [2024-12-12 10:09:16.401365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.401378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:02.787 [2024-12-12 10:09:16.401433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.787 [2024-12-12 10:09:16.401445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:03.046 #36 NEW cov: 12362 ft: 15478 corp: 35/87b lim: 5 exec/s: 36 rss: 75Mb L: 5/5 MS: 1 ChangeByte- 00:07:03.046 [2024-12-12 10:09:16.461195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.046 [2024-12-12 10:09:16.461221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.046 [2024-12-12 10:09:16.461276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.046 [2024-12-12 10:09:16.461289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.046 [2024-12-12 10:09:16.461343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.046 [2024-12-12 10:09:16.461356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.046 [2024-12-12 10:09:16.461411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.046 [2024-12-12 10:09:16.461424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.046 #37 NEW cov: 12362 ft: 15482 corp: 36/91b lim: 5 exec/s: 18 rss: 75Mb L: 4/5 MS: 1 
InsertRepeatedBytes-
00:07:03.046 #37 DONE cov: 12362 ft: 15482 corp: 36/91b lim: 5 exec/s: 18 rss: 75Mb
00:07:03.046 Done 37 runs in 2 second(s)
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:07:03.046 10:09:16 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
[2024-12-12 10:09:16.636976] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization...
00:07:03.046 [2024-12-12 10:09:16.637050] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473179 ] 00:07:03.305 [2024-12-12 10:09:16.919118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.563 [2024-12-12 10:09:16.972185] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.563 [2024-12-12 10:09:17.031528] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.563 [2024-12-12 10:09:17.047871] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:03.563 INFO: Running with entropic power schedule (0xFF, 100). 00:07:03.563 INFO: Seed: 3493464989 00:07:03.563 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:03.564 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:03.564 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:03.564 INFO: A corpus is not provided, starting from an empty corpus 00:07:03.564 [2024-12-12 10:09:17.092694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.564 [2024-12-12 10:09:17.092740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.564 #2 INITED cov: 12135 ft: 12134 corp: 1/1b exec/s: 0 rss: 72Mb 00:07:03.564 [2024-12-12 10:09:17.142711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.564 [2024-12-12 10:09:17.142753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.564 [2024-12-12 10:09:17.142801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.564 [2024-12-12 10:09:17.142817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.822 #3 NEW cov: 12248 ft: 13386 corp: 2/3b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:07:03.822 [2024-12-12 10:09:17.232992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.822 [2024-12-12 10:09:17.233024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.822 [2024-12-12 10:09:17.233072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.822 [2024-12-12 10:09:17.233088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.822 [2024-12-12 10:09:17.233117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.822 [2024-12-12 10:09:17.233133] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.822 #4 NEW cov: 12254 ft: 13693 corp: 3/6b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:07:03.822 [2024-12-12 10:09:17.323129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.822 [2024-12-12 10:09:17.323157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.822 [2024-12-12 10:09:17.323204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.822 [2024-12-12 10:09:17.323221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.822 #5 NEW cov: 12339 ft: 14063 corp: 4/8b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 InsertByte- 00:07:03.822 [2024-12-12 10:09:17.383321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.822 [2024-12-12 10:09:17.383350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.822 [2024-12-12 10:09:17.383397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.823 [2024-12-12 10:09:17.383412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.823 #6 NEW cov: 12339 ft: 14267 corp: 5/10b lim: 5 exec/s: 0 rss: 72Mb L: 2/3 MS: 1 CrossOver- 00:07:03.823 [2024-12-12 10:09:17.443616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.823 [2024-12-12 10:09:17.443645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:03.823 [2024-12-12 10:09:17.443691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.823 [2024-12-12 10:09:17.443707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:03.823 [2024-12-12 10:09:17.443743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.823 [2024-12-12 10:09:17.443759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:03.823 [2024-12-12 10:09:17.443787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.823 [2024-12-12 10:09:17.443803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:03.823 [2024-12-12 10:09:17.443831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.823 [2024-12-12 10:09:17.443845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.081 #7 NEW cov: 12339 ft: 14760 corp: 6/15b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CopyPart- 00:07:04.081 [2024-12-12 10:09:17.533824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.081 [2024-12-12 10:09:17.533854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.081 [2024-12-12 10:09:17.533887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.081 [2024-12-12 10:09:17.533902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.081 [2024-12-12 10:09:17.533931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.081 [2024-12-12 10:09:17.533946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.081 #8 NEW cov: 12339 ft: 14832 corp: 7/18b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:07:04.082 [2024-12-12 10:09:17.594019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.082 [2024-12-12 10:09:17.594047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.082 [2024-12-12 10:09:17.594098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.082 [2024-12-12 10:09:17.594114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.082 [2024-12-12 10:09:17.594143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.082 [2024-12-12 10:09:17.594158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.082 [2024-12-12 10:09:17.594187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.082 [2024-12-12 10:09:17.594201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.082 [2024-12-12 10:09:17.594229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.082 [2024-12-12 10:09:17.594245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.082 #9 NEW cov: 12339 ft: 
14891 corp: 8/23b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:04.082 [2024-12-12 10:09:17.653929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.082 [2024-12-12 10:09:17.653957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.082 #10 NEW cov: 12339 ft: 14926 corp: 9/24b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeByte- 00:07:04.082 [2024-12-12 10:09:17.714142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.082 [2024-12-12 10:09:17.714170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.082 [2024-12-12 10:09:17.714217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.082 [2024-12-12 10:09:17.714233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.340 #11 NEW cov: 12339 ft: 15002 corp: 10/26b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:07:04.340 [2024-12-12 10:09:17.804411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.340 [2024-12-12 10:09:17.804439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.340 [2024-12-12 10:09:17.804487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.340 [2024-12-12 10:09:17.804502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.340 #12 NEW cov: 12339 ft: 15070 corp: 11/28b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:07:04.340 [2024-12-12 10:09:17.894688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.340 [2024-12-12 10:09:17.894725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.340 [2024-12-12 10:09:17.894758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.340 [2024-12-12 10:09:17.894778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.340 #13 NEW cov: 12339 ft: 15103 corp: 12/30b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:07:04.340 [2024-12-12 10:09:17.944784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.340 [2024-12-12 10:09:17.944813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.340 [2024-12-12 
10:09:17.944846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.340 [2024-12-12 10:09:17.944861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.857 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:04.857 #14 NEW cov: 12362 ft: 15144 corp: 13/32b lim: 5 exec/s: 14 rss: 74Mb L: 2/5 MS: 1 ShuffleBytes- 00:07:04.857 [2024-12-12 10:09:18.316062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.857 [2024-12-12 10:09:18.316100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.857 [2024-12-12 10:09:18.316133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.857 [2024-12-12 10:09:18.316149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.857 [2024-12-12 10:09:18.316178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.858 [2024-12-12 10:09:18.316193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.858 [2024-12-12 10:09:18.316222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.858 [2024-12-12 10:09:18.316236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:04.858 [2024-12-12 10:09:18.316265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.858 [2024-12-12 10:09:18.316279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:04.858 #15 NEW cov: 12362 ft: 15184 corp: 14/37b lim: 5 exec/s: 15 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:07:04.858 [2024-12-12 10:09:18.406061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.858 [2024-12-12 10:09:18.406091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:04.858 [2024-12-12 10:09:18.406137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.858 [2024-12-12 10:09:18.406153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:04.858 [2024-12-12 10:09:18.406182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:04.858 [2024-12-12 10:09:18.406198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:04.858 #16 NEW cov: 12362 ft: 15198 corp: 15/40b lim: 5 exec/s: 16 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:07:05.116 [2024-12-12 10:09:18.496187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.116 [2024-12-12 10:09:18.496219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.116 #17 NEW cov: 12362 ft: 15219 corp: 16/41b lim: 5 exec/s: 17 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:07:05.116 [2024-12-12 10:09:18.546208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.116 [2024-12-12 10:09:18.546237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.116 #18 NEW cov: 12362 ft: 15281 corp: 17/42b lim: 5 exec/s: 18 rss: 74Mb L: 1/5 MS: 1 EraseBytes- 00:07:05.116 [2024-12-12 10:09:18.606500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.116 [2024-12-12 10:09:18.606529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.116 [2024-12-12 10:09:18.606576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.116 [2024-12-12 10:09:18.606591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.116 [2024-12-12 10:09:18.606620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.116 [2024-12-12 10:09:18.606635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.116 #19 NEW cov: 12362 ft: 15329 corp: 18/45b lim: 5 exec/s: 19 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:07:05.116 [2024-12-12 10:09:18.666775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.116 [2024-12-12 10:09:18.666805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.116 [2024-12-12 10:09:18.666852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.116 [2024-12-12 10:09:18.666867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.116 [2024-12-12 10:09:18.666896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.116 [2024-12-12 10:09:18.666911] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.116 [2024-12-12 10:09:18.666939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.116 [2024-12-12 10:09:18.666954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.116 [2024-12-12 10:09:18.666983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.117 [2024-12-12 10:09:18.666998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.117 #20 NEW cov: 12362 ft: 15354 corp: 19/50b lim: 5 exec/s: 20 rss: 74Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:05.375 [2024-12-12 10:09:18.757118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.375 [2024-12-12 10:09:18.757148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.375 [2024-12-12 10:09:18.757182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.375 [2024-12-12 10:09:18.757198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.375 [2024-12-12 10:09:18.757228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.375 [2024-12-12 10:09:18.757244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.375 [2024-12-12 10:09:18.757273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.375 [2024-12-12 10:09:18.757288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:05.375 [2024-12-12 10:09:18.757317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.375 [2024-12-12 10:09:18.757332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:05.375 #21 NEW cov: 12362 ft: 15383 corp: 20/55b lim: 5 exec/s: 21 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:07:05.375 [2024-12-12 10:09:18.847153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.375 [2024-12-12 10:09:18.847181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.375 [2024-12-12 10:09:18.847228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.375 
[2024-12-12 10:09:18.847244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.375 [2024-12-12 10:09:18.847273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.375 [2024-12-12 10:09:18.847287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:05.375 #22 NEW cov: 12362 ft: 15425 corp: 21/58b lim: 5 exec/s: 22 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:07:05.375 [2024-12-12 10:09:18.937337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.375 [2024-12-12 10:09:18.937366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.376 [2024-12-12 10:09:18.937413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.376 [2024-12-12 10:09:18.937428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.635 #23 NEW cov: 12362 ft: 15454 corp: 22/60b lim: 5 exec/s: 23 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:07:05.635 [2024-12-12 10:09:19.027602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.635 [2024-12-12 10:09:19.027641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.635 [2024-12-12 10:09:19.027692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.635 [2024-12-12 10:09:19.027708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:05.635 #24 NEW cov: 12362 ft: 15514 corp: 23/62b lim: 5 exec/s: 24 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 00:07:05.635 [2024-12-12 10:09:19.077641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:05.635 [2024-12-12 10:09:19.077670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:05.635 #25 NEW cov: 12362 ft: 15525 corp: 24/63b lim: 5 exec/s: 12 rss: 75Mb L: 1/5 MS: 1 EraseBytes- 00:07:05.635 #25 DONE cov: 12362 ft: 15525 corp: 24/63b lim: 5 exec/s: 12 rss: 75Mb 00:07:05.635 Done 25 runs in 2 second(s) 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:05.635 
10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:05.635 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:05.895 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:05.895 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:05.895 10:09:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:05.895 [2024-12-12 10:09:19.301739] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:05.895 [2024-12-12 10:09:19.301812] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473716 ] 00:07:06.154 [2024-12-12 10:09:19.575175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.154 [2024-12-12 10:09:19.627488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.154 [2024-12-12 10:09:19.686346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.154 [2024-12-12 10:09:19.702671] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:06.154 INFO: Running with entropic power schedule (0xFF, 100). 00:07:06.154 INFO: Seed: 1851477364 00:07:06.154 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:06.154 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:06.154 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:06.154 INFO: A corpus is not provided, starting from an empty corpus 00:07:06.154 #2 INITED exec/s: 0 rss: 65Mb 00:07:06.154 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:06.154 This may also happen if the target rejected all inputs we tried so far 00:07:06.154 [2024-12-12 10:09:19.761430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.154 [2024-12-12 10:09:19.761458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.154 [2024-12-12 10:09:19.761518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.154 [2024-12-12 10:09:19.761532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.672 NEW_FUNC[1/716]: 0x448aa8 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:06.672 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.672 #30 NEW cov: 12158 ft: 12159 corp: 2/24b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:06.672 [2024-12-12 10:09:20.092935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.672 [2024-12-12 10:09:20.093024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.672 [2024-12-12 10:09:20.093150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.672 [2024-12-12 10:09:20.093191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.672 [2024-12-12 10:09:20.093312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.672 [2024-12-12 10:09:20.093350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.672 #31 NEW cov: 12271 ft: 13076 corp: 3/54b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 CrossOver- 00:07:06.672 [2024-12-12 10:09:20.162552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.672 [2024-12-12 10:09:20.162580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.672 [2024-12-12 10:09:20.162643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.672 [2024-12-12 10:09:20.162657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.672 [2024-12-12 10:09:20.162720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.672 [2024-12-12 10:09:20.162733] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.672 #32 NEW cov: 12277 ft: 13401 corp: 4/82b lim: 40 exec/s: 0 rss: 73Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:07:06.672 [2024-12-12 10:09:20.202614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.673 [2024-12-12 10:09:20.202642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.673 [2024-12-12 10:09:20.202703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.673 [2024-12-12 10:09:20.202720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.673 [2024-12-12 10:09:20.202779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.673 [2024-12-12 10:09:20.202792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.673 #33 NEW cov: 12362 ft: 13616 corp: 5/112b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeBit- 00:07:06.673 [2024-12-12 10:09:20.262816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.673 [2024-12-12 10:09:20.262841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.673 [2024-12-12 10:09:20.262903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.673 [2024-12-12 10:09:20.262917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.673 [2024-12-12 10:09:20.262976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.673 [2024-12-12 10:09:20.262989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.673 #36 NEW cov: 12362 ft: 13666 corp: 6/143b lim: 40 exec/s: 0 rss: 73Mb L: 31/31 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:06.673 [2024-12-12 10:09:20.302924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.673 [2024-12-12 10:09:20.302949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.673 [2024-12-12 10:09:20.303011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.673 [2024-12-12 10:09:20.303024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.673 [2024-12-12 10:09:20.303079] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.673 [2024-12-12 10:09:20.303093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.933 #37 NEW cov: 12362 ft: 13706 corp: 7/173b lim: 40 exec/s: 0 rss: 73Mb L: 30/31 MS: 1 ShuffleBytes- 00:07:06.933 [2024-12-12 10:09:20.363081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.363106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.933 [2024-12-12 10:09:20.363185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:9300ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.363203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.933 [2024-12-12 10:09:20.363263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.363277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.933 #38 NEW cov: 12362 ft: 13808 corp: 8/202b lim: 40 exec/s: 0 rss: 73Mb L: 29/31 MS: 1 InsertByte- 00:07:06.933 [2024-12-12 10:09:20.423254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.423278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.933 [2024-12-12 10:09:20.423356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.423370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.933 [2024-12-12 10:09:20.423429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.423442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.933 #39 NEW cov: 12362 ft: 13899 corp: 9/232b lim: 40 exec/s: 0 rss: 73Mb L: 30/31 MS: 1 ShuffleBytes- 00:07:06.933 [2024-12-12 10:09:20.463354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00050000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.463379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.933 [2024-12-12 10:09:20.463440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.463454] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.933 [2024-12-12 10:09:20.463510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.463523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:06.933 #40 NEW cov: 12362 ft: 13949 corp: 10/263b lim: 40 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 ChangeBinInt- 00:07:06.933 [2024-12-12 10:09:20.523402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.523427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.933 [2024-12-12 10:09:20.523490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.523504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.933 #45 NEW cov: 12362 ft: 14005 corp: 11/284b lim: 40 exec/s: 0 rss: 73Mb L: 21/31 MS: 5 CopyPart-ShuffleBytes-InsertByte-ChangeBit-CrossOver- 00:07:06.933 [2024-12-12 10:09:20.563623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.563647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:06.933 [2024-12-12 10:09:20.563731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.563745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:06.933 [2024-12-12 10:09:20.563806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:06.933 [2024-12-12 10:09:20.563818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.193 #46 NEW cov: 12362 ft: 14033 corp: 12/314b lim: 40 exec/s: 0 rss: 73Mb L: 30/31 MS: 1 ShuffleBytes- 00:07:07.193 [2024-12-12 10:09:20.603733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.603758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.603836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000041 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.603850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.603909] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.603923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.193 #47 NEW cov: 12362 ft: 14061 corp: 13/344b lim: 40 exec/s: 0 rss: 73Mb L: 30/31 MS: 1 ChangeByte- 00:07:07.193 [2024-12-12 10:09:20.643863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.643887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.643950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.643963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.644036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00200100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.644050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.193 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:07.193 #48 NEW cov: 12385 ft: 14188 corp: 14/374b lim: 40 exec/s: 0 rss: 74Mb L: 30/31 MS: 1 ChangeBit- 00:07:07.193 [2024-12-12 10:09:20.703918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.703944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.704008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.704021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.193 #49 NEW cov: 12385 ft: 14234 corp: 15/393b lim: 40 exec/s: 49 rss: 74Mb L: 19/31 MS: 1 EraseBytes- 00:07:07.193 [2024-12-12 10:09:20.764312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.764340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.764403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.764416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.764476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 
nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.764489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.764551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.764564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.193 #50 NEW cov: 12385 ft: 14719 corp: 16/426b lim: 40 exec/s: 50 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:07.193 [2024-12-12 10:09:20.804288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.804312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.804375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.804389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.193 [2024-12-12 10:09:20.804449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.193 [2024-12-12 10:09:20.804462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.193 #51 NEW cov: 12385 ft: 14781 corp: 17/453b lim: 40 exec/s: 51 rss: 74Mb L: 27/33 MS: 1 CopyPart- 00:07:07.453 [2024-12-12 10:09:20.844549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.844573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:20.844636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.844649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:20.844709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.844727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:20.844784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.844798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.453 #52 NEW cov: 12385 ft: 14816 corp: 18/487b lim: 40 exec/s: 52 
rss: 74Mb L: 34/34 MS: 1 CopyPart- 00:07:07.453 [2024-12-12 10:09:20.884519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.884546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:20.884608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.884621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:20.884683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.884696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.453 #53 NEW cov: 12385 ft: 14828 corp: 19/517b lim: 40 exec/s: 53 rss: 74Mb L: 30/34 MS: 1 ShuffleBytes- 00:07:07.453 [2024-12-12 10:09:20.924618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.924643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:20.924703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.924720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:20.924793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.924807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.453 #54 NEW cov: 12385 ft: 14831 corp: 20/544b lim: 40 exec/s: 54 rss: 74Mb L: 27/34 MS: 1 CrossOver- 00:07:07.453 [2024-12-12 10:09:20.984667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.984694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:20.984758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:20.984773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.453 #55 NEW cov: 12385 ft: 14884 corp: 21/563b lim: 40 exec/s: 55 rss: 74Mb L: 19/34 MS: 1 ChangeByte- 00:07:07.453 [2024-12-12 10:09:21.044993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00050000 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:21.045018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:21.045082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:21.045095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.453 [2024-12-12 10:09:21.045159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.453 [2024-12-12 10:09:21.045172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.453 #56 NEW cov: 12385 ft: 14907 corp: 22/594b lim: 40 exec/s: 56 rss: 74Mb L: 31/34 MS: 1 ChangeByte- 00:07:07.712 [2024-12-12 10:09:21.105159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.105184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.105246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.105259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.105320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.105333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.712 #57 NEW cov: 12385 ft: 14920 corp: 23/625b lim: 40 exec/s: 57 rss: 74Mb L: 31/34 MS: 1 CrossOver- 00:07:07.712 [2024-12-12 10:09:21.145350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.145374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.145451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.145465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.145526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.145540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.145601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE 
(82) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000bf4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.145614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.712 #58 NEW cov: 12385 ft: 14957 corp: 24/657b lim: 40 exec/s: 58 rss: 74Mb L: 32/34 MS: 1 InsertRepeatedBytes- 00:07:07.712 [2024-12-12 10:09:21.185342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.185366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.185429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:9300ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.185442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.185518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:020000ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.185532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.712 #59 NEW cov: 12385 ft: 14969 corp: 25/686b lim: 40 exec/s: 59 rss: 74Mb L: 29/34 MS: 1 ChangeBinInt- 00:07:07.712 [2024-12-12 10:09:21.245639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.245668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.245733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:9300ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.245747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.245807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.245820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.245881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.245893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.712 #60 NEW cov: 12385 ft: 14980 corp: 26/722b lim: 40 exec/s: 60 rss: 74Mb L: 36/36 MS: 1 CrossOver- 00:07:07.712 [2024-12-12 10:09:21.285513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:5f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.285538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.712 [2024-12-12 10:09:21.285615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.712 [2024-12-12 10:09:21.285629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.713 #61 NEW cov: 12385 ft: 15029 corp: 27/744b lim: 40 exec/s: 61 rss: 74Mb L: 22/36 MS: 1 InsertByte- 00:07:07.713 [2024-12-12 10:09:21.325871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.713 [2024-12-12 10:09:21.325896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.713 [2024-12-12 10:09:21.325953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:d13e22ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.713 [2024-12-12 10:09:21.325969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.713 [2024-12-12 10:09:21.326029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3c020000 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.713 [2024-12-12 10:09:21.326043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.713 [2024-12-12 10:09:21.326099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.713 [2024-12-12 10:09:21.326111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.713 #62 NEW cov: 12385 ft: 15036 corp: 28/780b lim: 40 exec/s: 62 rss: 74Mb L: 36/36 MS: 1 CMP- DE: "\016\321>\"\377<\002\000"- 00:07:07.973 [2024-12-12 10:09:21.366018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00d8d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.366044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.366100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.366116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.366173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.366186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.366241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 
10:09:21.366254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.973 #63 NEW cov: 12385 ft: 15050 corp: 29/819b lim: 40 exec/s: 63 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:07.973 [2024-12-12 10:09:21.426063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.426088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.426149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:9300ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.426163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.426220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.426233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.973 #64 NEW cov: 12385 ft: 15074 corp: 30/848b lim: 40 exec/s: 64 rss: 74Mb L: 29/39 MS: 1 ShuffleBytes- 00:07:07.973 [2024-12-12 10:09:21.466336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.466361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.466438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.466452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.466509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00009300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.466523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.466583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffff0200 cdw11:00ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.466596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.973 #65 NEW cov: 12385 ft: 15081 corp: 31/887b lim: 40 exec/s: 65 rss: 74Mb L: 39/39 MS: 1 CrossOver- 00:07:07.973 [2024-12-12 10:09:21.526479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.526505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.526568] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ffff7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.526583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.526643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:99380ed9 cdw11:09000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.526656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.526719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.526733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:07.973 #66 NEW cov: 12385 ft: 15086 corp: 32/925b lim: 40 exec/s: 66 rss: 74Mb L: 38/39 MS: 1 CMP- DE: "\377\377~\2318\016\331\011"- 00:07:07.973 [2024-12-12 10:09:21.566475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.566500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.566575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.566589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.566650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.566664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:07.973 #67 NEW cov: 12385 ft: 15117 corp: 33/956b lim: 40 exec/s: 67 rss: 74Mb L: 31/39 MS: 1 CopyPart- 00:07:07.973 [2024-12-12 10:09:21.606605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:52000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.606631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.606692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.606706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:07.973 [2024-12-12 10:09:21.606769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:07.973 [2024-12-12 10:09:21.606783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:08.233 #70 NEW cov: 12385 ft: 15121 corp: 34/982b lim: 40 exec/s: 70 rss: 74Mb L: 26/39 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:08.233 [2024-12-12 10:09:21.646646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.646669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.234 [2024-12-12 10:09:21.646747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.646765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.234 [2024-12-12 10:09:21.646819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.646832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.234 #71 NEW cov: 12385 ft: 15140 corp: 35/1012b lim: 40 exec/s: 71 rss: 74Mb L: 30/39 MS: 1 ChangeByte- 00:07:08.234 [2024-12-12 10:09:21.686778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.686803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.234 [2024-12-12 10:09:21.686865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:9300ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.686878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.234 [2024-12-12 10:09:21.686935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.686948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.234 #72 NEW cov: 12385 ft: 15188 corp: 36/1042b lim: 40 exec/s: 72 rss: 74Mb L: 30/39 MS: 1 CopyPart- 00:07:08.234 [2024-12-12 10:09:21.727027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00050000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.727052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.234 [2024-12-12 10:09:21.727111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.727124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.234 [2024-12-12 10:09:21.727198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.727212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:08.234 [2024-12-12 10:09:21.727272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00cdcdcd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.234 [2024-12-12 10:09:21.727285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:08.234 #73 NEW cov: 12385 ft: 15198 corp: 37/1078b lim: 40 exec/s: 36 rss: 75Mb L: 36/39 MS: 1 InsertRepeatedBytes- 00:07:08.234 #73 DONE cov: 12385 ft: 15198 corp: 37/1078b lim: 40 exec/s: 36 rss: 75Mb 00:07:08.234 ###### Recommended dictionary. ###### 00:07:08.234 "\016\321>\"\377<\002\000" # Uses: 0 00:07:08.234 "\377\377~\2318\016\331\011" # Uses: 0 00:07:08.234 ###### End of recommended dictionary. ###### 00:07:08.234 Done 73 runs in 2 second(s) 00:07:08.234 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.493 10:09:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:08.493 [2024-12-12 10:09:21.921493] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:08.493 [2024-12-12 10:09:21.921565] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474246 ] 00:07:08.753 [2024-12-12 10:09:22.193842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.753 [2024-12-12 10:09:22.246084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.753 [2024-12-12 10:09:22.304984] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.753 [2024-12-12 10:09:22.321311] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:08.753 INFO: Running with entropic power schedule (0xFF, 100). 00:07:08.753 INFO: Seed: 177506885 00:07:08.753 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:08.753 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:08.753 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:08.753 INFO: A corpus is not provided, starting from an empty corpus 00:07:08.753 #2 INITED exec/s: 0 rss: 65Mb 00:07:08.753 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:08.753 This may also happen if the target rejected all inputs we tried so far 00:07:08.753 [2024-12-12 10:09:22.387182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.753 [2024-12-12 10:09:22.387210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:08.753 [2024-12-12 10:09:22.387276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.753 [2024-12-12 10:09:22.387290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:08.753 [2024-12-12 10:09:22.387349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:08.753 [2024-12-12 10:09:22.387362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.272 NEW_FUNC[1/716]: 0x44a818 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:09.272 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:09.272 #8 NEW cov: 12150 ft: 12151 corp: 2/30b lim: 40 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:07:09.272 [2024-12-12 10:09:22.718102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dad2dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.718159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.272 [2024-12-12 10:09:22.718250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.718279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.272 [2024-12-12 10:09:22.718364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.718390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.272 NEW_FUNC[1/1]: 0x19cdb18 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1575 00:07:09.272 #9 NEW cov: 12282 ft: 12848 corp: 3/59b lim: 40 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 ChangeBit- 00:07:09.272 [2024-12-12 10:09:22.787621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:03ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.787647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.272 #13 NEW cov: 12288 ft: 13779 corp: 4/68b lim: 40 exec/s: 0 rss: 74Mb L: 9/29 MS: 4 ChangeBit-ChangeByte-CMP-CopyPart- DE: "\377\377\377\003"- 00:07:09.272 [2024-12-12 10:09:22.828051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.828077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.272 [2024-12-12 10:09:22.828135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.828148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.272 [2024-12-12 10:09:22.828205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.828218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.272 #14 NEW cov: 12373 ft: 14016 corp: 5/97b lim: 40 exec/s: 0 rss: 74Mb L: 29/29 MS: 1 ShuffleBytes- 00:07:09.272 [2024-12-12 10:09:22.868341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.868367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.272 [2024-12-12 10:09:22.868422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.868436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.272 [2024-12-12 
10:09:22.868492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadaffff cdw11:ff03dada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.272 [2024-12-12 10:09:22.868505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.272 [2024-12-12 10:09:22.868561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.273 [2024-12-12 10:09:22.868573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.273 #15 NEW cov: 12373 ft: 14433 corp: 6/130b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 PersAutoDict- DE: "\377\377\377\003"- 00:07:09.532 [2024-12-12 10:09:22.928331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:22.928357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:22.928429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dada0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:22.928442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:22.928498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:001ddada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:22.928511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.532 #16 NEW cov: 12373 ft: 14537 corp: 7/159b lim: 40 exec/s: 0 rss: 74Mb L: 29/33 MS: 1 ChangeBinInt- 00:07:09.532 [2024-12-12 10:09:22.968586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:22.968611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:22.968669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dada0000 cdw11:ffffff03 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:22.968682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:22.968749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:001ddada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:22.968763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:22.968821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:22.968834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:07:09.532 #17 NEW cov: 12373 ft: 14610 corp: 8/192b lim: 40 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 PersAutoDict- DE: "\377\377\377\003"- 00:07:09.532 [2024-12-12 10:09:23.028790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:23.028822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:23.028881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dada30da cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:23.028895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:23.028955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadaff cdw11:ffff03da SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:23.028969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:23.029025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:23.029038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.532 #18 NEW cov: 12373 ft: 14690 corp: 9/226b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertByte- 00:07:09.532 [2024-12-12 10:09:23.088493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:03ff31ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:23.088518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.532 #24 NEW cov: 12373 ft: 14728 corp: 10/235b lim: 40 exec/s: 0 rss: 74Mb L: 9/34 MS: 1 CopyPart- 00:07:09.532 [2024-12-12 10:09:23.149119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:23.149144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:23.149204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:23.149217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:23.149272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.532 [2024-12-12 10:09:23.149285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.532 [2024-12-12 10:09:23.149341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.533 
[2024-12-12 10:09:23.149354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:09.792 #25 NEW cov: 12373 ft: 14775 corp: 11/269b lim: 40 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CopyPart- 00:07:09.792 [2024-12-12 10:09:23.189042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dad2dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.189067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.792 [2024-12-12 10:09:23.189093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.189106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.792 [2024-12-12 10:09:23.189163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dbdadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.189176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.792 #26 NEW cov: 12373 ft: 14807 corp: 12/298b lim: 40 exec/s: 0 rss: 74Mb L: 29/34 MS: 1 ChangeBit- 00:07:09.792 [2024-12-12 10:09:23.249056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.249085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.792 [2024-12-12 10:09:23.249143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.249156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.792 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:09.792 #27 NEW cov: 12396 ft: 15033 corp: 13/318b lim: 40 exec/s: 0 rss: 74Mb L: 20/34 MS: 1 EraseBytes- 00:07:09.792 [2024-12-12 10:09:23.289326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dad2dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.289352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.792 [2024-12-12 10:09:23.289410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:cadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.289423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.792 [2024-12-12 10:09:23.289480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.289493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:09.792 #28 NEW cov: 12396 ft: 15041 corp: 14/347b lim: 40 exec/s: 0 rss: 74Mb L: 29/34 MS: 1 ChangeBit- 00:07:09.792 [2024-12-12 10:09:23.329446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dad2dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.329471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.792 [2024-12-12 10:09:23.329530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.329543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:09.792 [2024-12-12 10:09:23.329598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dbdadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.329611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:09.792 #29 NEW cov: 12396 ft: 15142 corp: 15/377b lim: 40 exec/s: 29 rss: 75Mb L: 30/34 MS: 1 InsertByte- 00:07:09.792 [2024-12-12 10:09:23.389457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:03ffdad2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.389481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:09.792 [2024-12-12 10:09:23.389539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dada31ff cdw11:dadaffda SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:09.792 [2024-12-12 10:09:23.389553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.052 #30 NEW cov: 12396 ft: 15157 corp: 16/396b lim: 40 exec/s: 30 rss: 75Mb L: 19/34 MS: 1 CrossOver- 00:07:10.052 [2024-12-12 10:09:23.449622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.449647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.449707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadada5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.449727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.052 #31 NEW cov: 12396 ft: 15179 corp: 17/416b lim: 40 exec/s: 31 rss: 75Mb L: 20/34 MS: 1 ChangeBit- 00:07:10.052 [2024-12-12 10:09:23.509949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.509974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.510031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.510044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.510112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.510125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.052 #32 NEW cov: 12396 ft: 15190 corp: 18/446b lim: 40 exec/s: 32 rss: 75Mb L: 30/34 MS: 1 InsertByte- 00:07:10.052 [2024-12-12 10:09:23.550230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.550255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.550309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:ffffff03 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.550323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.550377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.550390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.550446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.550460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.052 #33 NEW cov: 12396 ft: 15243 corp: 19/480b lim: 40 exec/s: 33 rss: 75Mb L: 34/34 MS: 1 PersAutoDict- DE: "\377\377\377\003"- 00:07:10.052 [2024-12-12 10:09:23.610226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.610250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.610309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.610323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.610380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.610393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.052 #34 NEW cov: 12396 ft: 15325 corp: 20/510b lim: 40 exec/s: 34 rss: 
75Mb L: 30/34 MS: 1 ShuffleBytes- 00:07:10.052 [2024-12-12 10:09:23.670411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dad2dada cdw11:91dadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.670435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.670494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.670507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.052 [2024-12-12 10:09:23.670562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadbdada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.052 [2024-12-12 10:09:23.670575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.312 #35 NEW cov: 12396 ft: 15348 corp: 21/540b lim: 40 exec/s: 35 rss: 75Mb L: 30/34 MS: 1 InsertByte- 00:07:10.312 [2024-12-12 10:09:23.710200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:03ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.710225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.312 #36 NEW cov: 12396 ft: 15360 corp: 22/552b lim: 40 exec/s: 36 rss: 75Mb L: 12/34 MS: 1 InsertRepeatedBytes- 00:07:10.312 [2024-12-12 10:09:23.750778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.750802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.312 [2024-12-12 10:09:23.750860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.750873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.312 [2024-12-12 10:09:23.750931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.750943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.312 [2024-12-12 10:09:23.751000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:0adadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.751013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.312 #37 NEW cov: 12396 ft: 15370 corp: 23/585b lim: 40 exec/s: 37 rss: 75Mb L: 33/34 MS: 1 CrossOver- 00:07:10.312 [2024-12-12 10:09:23.790422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:03ffdad2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:10.312 [2024-12-12 10:09:23.790446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.312 #38 NEW cov: 12396 ft: 15477 corp: 24/595b lim: 40 exec/s: 38 rss: 75Mb L: 10/34 MS: 1 EraseBytes- 00:07:10.312 [2024-12-12 10:09:23.850894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.850919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.312 [2024-12-12 10:09:23.850991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dada0000 cdw11:00ab0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.851008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.312 [2024-12-12 10:09:23.851065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:001ddada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.851079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.312 #39 NEW cov: 12396 ft: 15488 corp: 25/624b lim: 40 exec/s: 39 rss: 75Mb L: 29/34 MS: 1 ChangeByte- 00:07:10.312 [2024-12-12 10:09:23.890843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:03ffdad2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.890868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.312 [2024-12-12 10:09:23.890924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffdadaff cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.890938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.312 #40 NEW cov: 12396 ft: 15501 corp: 26/640b lim: 40 exec/s: 40 rss: 75Mb L: 16/34 MS: 1 EraseBytes- 00:07:10.312 [2024-12-12 10:09:23.930990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.931014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.312 [2024-12-12 10:09:23.931071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.312 [2024-12-12 10:09:23.931084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.571 #46 NEW cov: 12396 ft: 15520 corp: 27/658b lim: 40 exec/s: 46 rss: 75Mb L: 18/34 MS: 1 EraseBytes- 00:07:10.571 [2024-12-12 10:09:23.991030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3132ffff cdw11:03ffdad2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.571 [2024-12-12 10:09:23.991054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.571 #47 NEW cov: 12396 ft: 15549 corp: 28/668b lim: 40 exec/s: 47 rss: 75Mb L: 10/34 MS: 1 ChangeByte- 00:07:10.571 [2024-12-12 10:09:24.051337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:da2fdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.571 [2024-12-12 10:09:24.051362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.571 [2024-12-12 10:09:24.051420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadada5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.571 [2024-12-12 10:09:24.051433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.571 #48 NEW cov: 12396 ft: 15585 corp: 29/688b lim: 40 exec/s: 48 rss: 75Mb L: 20/34 MS: 1 ChangeByte- 00:07:10.571 [2024-12-12 10:09:24.111467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:da2fdada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.571 [2024-12-12 10:09:24.111492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.571 [2024-12-12 10:09:24.111551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadada5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.571 [2024-12-12 10:09:24.111564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.571 #49 NEW cov: 12396 ft: 15595 corp: 30/708b lim: 40 exec/s: 49 rss: 76Mb L: 20/34 MS: 1 ShuffleBytes- 00:07:10.571 [2024-12-12 10:09:24.171934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.571 [2024-12-12 10:09:24.171959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.571 [2024-12-12 10:09:24.172021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dada0000 cdw11:ffffff03 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.571 [2024-12-12 10:09:24.172034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.571 [2024-12-12 10:09:24.172091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:001ddada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.571 [2024-12-12 10:09:24.172105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.571 [2024-12-12 10:09:24.172161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:001ddada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.571 [2024-12-12 10:09:24.172174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.831 #50 NEW cov: 12396 ft: 15601 corp: 31/741b lim: 40 exec/s: 50 rss: 76Mb L: 33/34 MS: 1 CopyPart- 00:07:10.831 [2024-12-12 10:09:24.232148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dadada00 cdw11:0000dada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.831 [2024-12-12 10:09:24.232184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.831 [2024-12-12 10:09:24.232272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.831 [2024-12-12 10:09:24.232286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.831 [2024-12-12 10:09:24.232342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.831 [2024-12-12 10:09:24.232355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.831 [2024-12-12 10:09:24.232409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.831 [2024-12-12 10:09:24.232421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:10.831 #51 NEW cov: 12396 ft: 15614 corp: 32/778b lim: 40 exec/s: 51 rss: 76Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:10.832 [2024-12-12 10:09:24.271917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dad2dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.832 [2024-12-12 10:09:24.271942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.832 [2024-12-12 10:09:24.272016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadbda cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.832 [2024-12-12 10:09:24.272030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.832 #52 NEW cov: 12396 ft: 15625 corp: 33/797b lim: 40 exec/s: 52 rss: 76Mb L: 19/37 MS: 1 EraseBytes- 00:07:10.832 [2024-12-12 10:09:24.312016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:31ffffff cdw11:03ffdad2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.832 [2024-12-12 10:09:24.312043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.832 [2024-12-12 10:09:24.312100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dada31ff cdw11:dadaffda SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.832 [2024-12-12 10:09:24.312113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.832 #53 NEW cov: 12396 ft: 15639 corp: 34/816b lim: 40 exec/s: 53 rss: 76Mb L: 19/37 MS: 1 ShuffleBytes- 00:07:10.832 [2024-12-12 10:09:24.352288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:dad2dada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.832 [2024-12-12 10:09:24.352313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.832 [2024-12-12 10:09:24.352368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.832 [2024-12-12 10:09:24.352381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:10.832 [2024-12-12 10:09:24.352435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadada0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:10.832 [2024-12-12 10:09:24.352448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:10.832 #54 NEW cov: 12396 ft: 15642 corp: 35/840b lim: 40 exec/s: 27 rss: 76Mb L: 24/37 MS: 1 EraseBytes- 00:07:10.832 #54 DONE cov: 12396 ft: 15642 corp: 35/840b lim: 40 exec/s: 27 rss: 76Mb 00:07:10.832 ###### Recommended dictionary. ###### 00:07:10.832 "\377\377\377\003" # Uses: 3 00:07:10.832 ###### End of recommended dictionary. ###### 00:07:10.832 Done 54 runs in 2 second(s) 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:11.091 10:09:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:11.091 [2024-12-12 10:09:24.546870] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:11.091 [2024-12-12 10:09:24.546960] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474595 ] 00:07:11.350 [2024-12-12 10:09:24.821710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.350 [2024-12-12 10:09:24.876311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.350 [2024-12-12 10:09:24.935347] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.350 [2024-12-12 10:09:24.951687] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:11.350 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.350 INFO: Seed: 2807521273 00:07:11.608 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:11.608 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:11.608 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:11.608 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.608 #2 INITED exec/s: 0 rss: 65Mb 00:07:11.608 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.608 This may also happen if the target rejected all inputs we tried so far 00:07:11.608 [2024-12-12 10:09:25.017500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.608 [2024-12-12 10:09:25.017531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.608 [2024-12-12 10:09:25.017608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.608 [2024-12-12 10:09:25.017623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.608 [2024-12-12 10:09:25.017682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.608 [2024-12-12 10:09:25.017695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.866 NEW_FUNC[1/717]: 0x44c588 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:11.866 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.866 #11 NEW cov: 12150 ft: 12151 corp: 2/27b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 4 ChangeBit-CrossOver-CopyPart-InsertRepeatedBytes- 00:07:11.866 [2024-12-12 10:09:25.348786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:11.866 [2024-12-12 10:09:25.348869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.866 [2024-12-12 10:09:25.348987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.866 [2024-12-12 10:09:25.349029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.866 [2024-12-12 10:09:25.349136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.866 [2024-12-12 10:09:25.349174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.866 #12 NEW cov: 12280 ft: 12922 corp: 3/53b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ChangeBinInt- 00:07:11.866 [2024-12-12 10:09:25.418310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.866 [2024-12-12 10:09:25.418337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.866 [2024-12-12 10:09:25.418393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.866 [2024-12-12 10:09:25.418406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.866 [2024-12-12 10:09:25.418460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.866 [2024-12-12 10:09:25.418473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.866 #13 NEW cov: 12286 ft: 13220 corp: 4/79b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ChangeBit- 00:07:11.866 [2024-12-12 10:09:25.458390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.866 [2024-12-12 10:09:25.458416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.866 [2024-12-12 10:09:25.458471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.866 [2024-12-12 10:09:25.458485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.866 [2024-12-12 10:09:25.458539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.866 [2024-12-12 10:09:25.458552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.866 #14 NEW cov: 12371 ft: 13490 corp: 5/105b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ShuffleBytes- 00:07:11.866 [2024-12-12 10:09:25.498226] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:11.866 [2024-12-12 10:09:25.498251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.124 #15 NEW cov: 12371 ft: 14218 corp: 6/120b lim: 40 exec/s: 0 rss: 73Mb L: 15/26 MS: 1 EraseBytes- 00:07:12.124 [2024-12-12 10:09:25.558677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.124 [2024-12-12 10:09:25.558702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.124 [2024-12-12 10:09:25.558759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000008 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.124 [2024-12-12 10:09:25.558773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.124 [2024-12-12 10:09:25.558828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.124 [2024-12-12 10:09:25.558841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.125 #16 NEW cov: 12371 ft: 14262 corp: 7/146b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ChangeBinInt- 00:07:12.125 [2024-12-12 10:09:25.598803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.598831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.125 [2024-12-12 10:09:25.598888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00008000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.598901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.125 [2024-12-12 10:09:25.598957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.598970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.125 #17 NEW cov: 12371 ft: 14342 corp: 8/172b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ChangeBit- 00:07:12.125 [2024-12-12 10:09:25.658834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.658858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.125 [2024-12-12 10:09:25.658916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.658930] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.125 #20 NEW cov: 12371 ft: 14556 corp: 9/192b lim: 40 exec/s: 0 rss: 73Mb L: 20/26 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:12.125 [2024-12-12 10:09:25.699060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.699084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.125 [2024-12-12 10:09:25.699136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.699149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.125 [2024-12-12 10:09:25.699205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.699218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.125 #21 NEW cov: 12371 ft: 14603 corp: 10/218b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ChangeByte- 00:07:12.125 [2024-12-12 10:09:25.739170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.739194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.125 [2024-12-12 10:09:25.739250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:001a8000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.739263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.125 [2024-12-12 10:09:25.739318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.125 [2024-12-12 10:09:25.739331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.384 #22 NEW cov: 12371 ft: 14627 corp: 11/244b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ChangeBinInt- 00:07:12.385 [2024-12-12 10:09:25.799337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.799362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.385 [2024-12-12 10:09:25.799419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.799432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.385 [2024-12-12 10:09:25.799486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.799499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.385 #23 NEW cov: 12371 ft: 14632 corp: 12/270b lim: 40 exec/s: 0 rss: 73Mb L: 26/26 MS: 1 ChangeByte- 00:07:12.385 [2024-12-12 10:09:25.839192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.839216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.385 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:12.385 #24 NEW cov: 12394 ft: 14692 corp: 13/285b lim: 40 exec/s: 0 rss: 74Mb L: 15/26 MS: 1 CopyPart- 00:07:12.385 [2024-12-12 10:09:25.899300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.899325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.385 #25 NEW cov: 12394 ft: 14717 corp: 14/300b lim: 40 exec/s: 0 rss: 74Mb L: 15/26 MS: 1 CopyPart- 00:07:12.385 [2024-12-12 10:09:25.939889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.939915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.385 [2024-12-12 10:09:25.939970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.939984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.385 [2024-12-12 10:09:25.940043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.940057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.385 [2024-12-12 10:09:25.940112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.940125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.385 #26 NEW cov: 12394 ft: 15068 corp: 15/332b lim: 40 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:12.385 [2024-12-12 10:09:25.979855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.979881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.385 [2024-12-12 10:09:25.979939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 
nsid:0 cdw10:00000000 cdw11:001a8000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.979953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.385 [2024-12-12 10:09:25.980006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.385 [2024-12-12 10:09:25.980019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.385 #27 NEW cov: 12394 ft: 15122 corp: 16/358b lim: 40 exec/s: 27 rss: 74Mb L: 26/32 MS: 1 ShuffleBytes- 00:07:12.644 [2024-12-12 10:09:26.040171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.040196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.644 [2024-12-12 10:09:26.040253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00008000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.040266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.644 [2024-12-12 10:09:26.040320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.040333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.644 [2024-12-12 10:09:26.040389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.040402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.644 #28 NEW cov: 12394 ft: 15134 corp: 17/394b lim: 40 exec/s: 28 rss: 74Mb L: 36/36 MS: 1 CopyPart- 00:07:12.644 [2024-12-12 10:09:26.080140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.080165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.644 [2024-12-12 10:09:26.080238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:001a8000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.080252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.644 [2024-12-12 10:09:26.080307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:6c000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.080320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.644 #34 NEW cov: 12394 ft: 15167 corp: 18/420b lim: 40 exec/s: 34 rss: 74Mb L: 26/36 MS: 1 ChangeByte- 
00:07:12.644 [2024-12-12 10:09:26.140269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.140295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.644 [2024-12-12 10:09:26.140367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:001a8000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.140380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.644 [2024-12-12 10:09:26.140442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.140455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.644 #35 NEW cov: 12394 ft: 15183 corp: 19/446b lim: 40 exec/s: 35 rss: 74Mb L: 26/36 MS: 1 CopyPart- 00:07:12.644 [2024-12-12 10:09:26.180410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.644 [2024-12-12 10:09:26.180436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.645 [2024-12-12 10:09:26.180494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000060 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.645 [2024-12-12 10:09:26.180507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.645 [2024-12-12 10:09:26.180579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.645 [2024-12-12 10:09:26.180594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.645 #36 NEW cov: 12394 ft: 15194 corp: 20/472b lim: 40 exec/s: 36 rss: 74Mb L: 26/36 MS: 1 ChangeByte- 00:07:12.645 [2024-12-12 10:09:26.220688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.645 [2024-12-12 10:09:26.220712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.645 [2024-12-12 10:09:26.220771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00008000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.645 [2024-12-12 10:09:26.220784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.645 [2024-12-12 10:09:26.220838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.645 [2024-12-12 10:09:26.220851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:07:12.645 [2024-12-12 10:09:26.220907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:07000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.645 [2024-12-12 10:09:26.220920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:12.645 #37 NEW cov: 12394 ft: 15212 corp: 21/508b lim: 40 exec/s: 37 rss: 74Mb L: 36/36 MS: 1 ChangeBinInt- 00:07:12.645 [2024-12-12 10:09:26.280765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f8ffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.645 [2024-12-12 10:09:26.280790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.645 [2024-12-12 10:09:26.280848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:001a8000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.645 [2024-12-12 10:09:26.280861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.645 [2024-12-12 10:09:26.280918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.645 [2024-12-12 10:09:26.280931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.904 #38 NEW cov: 12394 ft: 15297 corp: 22/534b lim: 40 exec/s: 38 rss: 74Mb L: 26/36 MS: 1 ChangeBinInt- 00:07:12.904 [2024-12-12 10:09:26.320703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.904 [2024-12-12 10:09:26.320732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.904 [2024-12-12 10:09:26.320790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.904 [2024-12-12 10:09:26.320803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.904 #44 NEW cov: 12394 ft: 15335 corp: 23/554b lim: 40 exec/s: 44 rss: 74Mb L: 20/36 MS: 1 CrossOver- 00:07:12.904 [2024-12-12 10:09:26.380999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.904 [2024-12-12 10:09:26.381023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.904 [2024-12-12 10:09:26.381079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00280000 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.904 [2024-12-12 10:09:26.381092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.904 [2024-12-12 10:09:26.381149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:12.904 [2024-12-12 10:09:26.381162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.904 #45 NEW cov: 12394 ft: 15341 corp: 24/580b lim: 40 exec/s: 45 rss: 74Mb L: 26/36 MS: 1 ChangeBit- 00:07:12.904 [2024-12-12 10:09:26.441164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.904 [2024-12-12 10:09:26.441188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.904 [2024-12-12 10:09:26.441242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00001a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.904 [2024-12-12 10:09:26.441256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.904 [2024-12-12 10:09:26.441326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.904 [2024-12-12 10:09:26.441340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.905 #46 NEW cov: 12394 ft: 15357 corp: 25/606b lim: 40 exec/s: 46 rss: 74Mb L: 26/36 MS: 1 ChangeBinInt- 00:07:12.905 [2024-12-12 10:09:26.481272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.905 [2024-12-12 10:09:26.481296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.905 [2024-12-12 10:09:26.481349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00600000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.905 [2024-12-12 10:09:26.481363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.905 [2024-12-12 10:09:26.481415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.905 [2024-12-12 10:09:26.481432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.905 #47 NEW cov: 12394 ft: 15382 corp: 26/633b lim: 40 exec/s: 47 rss: 74Mb L: 27/36 MS: 1 CrossOver- 00:07:12.905 [2024-12-12 10:09:26.541535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:20000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.905 [2024-12-12 10:09:26.541559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.905 [2024-12-12 10:09:26.541617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.905 [2024-12-12 10:09:26.541630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.905 [2024-12-12 10:09:26.541686] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000002b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.905 [2024-12-12 10:09:26.541699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.164 #48 NEW cov: 12394 ft: 15397 corp: 27/659b lim: 40 exec/s: 48 rss: 74Mb L: 26/36 MS: 1 ChangeBit- 00:07:13.164 [2024-12-12 10:09:26.601788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.601813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.164 [2024-12-12 10:09:26.601884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.601898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.164 [2024-12-12 10:09:26.601953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.601966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.164 [2024-12-12 10:09:26.602021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.602034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.164 #49 NEW cov: 12394 ft: 15410 corp: 28/695b lim: 40 exec/s: 49 rss: 74Mb L: 36/36 MS: 1 CopyPart- 00:07:13.164 [2024-12-12 10:09:26.641866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.641890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.164 [2024-12-12 10:09:26.641948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00600000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.641961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.164 [2024-12-12 10:09:26.642015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.642028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.164 [2024-12-12 10:09:26.642079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:60000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.642095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.164 #50 NEW cov: 12394 ft: 15416 
corp: 29/729b lim: 40 exec/s: 50 rss: 74Mb L: 34/36 MS: 1 CopyPart- 00:07:13.164 [2024-12-12 10:09:26.701751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.701776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.164 [2024-12-12 10:09:26.701831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00600000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.701844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.164 #51 NEW cov: 12394 ft: 15503 corp: 30/750b lim: 40 exec/s: 51 rss: 74Mb L: 21/36 MS: 1 EraseBytes- 00:07:13.164 [2024-12-12 10:09:26.742024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:80000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.742049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.164 [2024-12-12 10:09:26.742102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.742116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.164 [2024-12-12 10:09:26.742172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.742185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.164 #52 NEW cov: 12394 ft: 15524 corp: 31/776b lim: 40 exec/s: 52 rss: 74Mb L: 26/36 MS: 1 ChangeBit- 00:07:13.164 [2024-12-12 10:09:26.782325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.164 [2024-12-12 10:09:26.782350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.165 [2024-12-12 10:09:26.782405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.165 [2024-12-12 10:09:26.782418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.165 [2024-12-12 10:09:26.782477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.165 [2024-12-12 10:09:26.782490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.165 [2024-12-12 10:09:26.782544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:00070000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.165 [2024-12-12 10:09:26.782557] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.424 #53 NEW cov: 12394 ft: 15536 corp: 32/812b lim: 40 exec/s: 53 rss: 75Mb L: 36/36 MS: 1 CrossOver- 00:07:13.424 [2024-12-12 10:09:26.842342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.424 [2024-12-12 10:09:26.842367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.424 [2024-12-12 10:09:26.842422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00280000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.424 [2024-12-12 10:09:26.842438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.424 [2024-12-12 10:09:26.842494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.424 [2024-12-12 10:09:26.842507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.424 #54 NEW cov: 12394 ft: 15557 corp: 33/838b lim: 40 exec/s: 54 rss: 75Mb L: 26/36 MS: 1 ShuffleBytes- 00:07:13.424 [2024-12-12 10:09:26.882120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.424 [2024-12-12 10:09:26.882145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.424 #55 NEW cov: 12394 ft: 15604 corp: 34/853b lim: 40 exec/s: 55 rss: 75Mb L: 15/36 MS: 1 ChangeBit- 00:07:13.424 [2024-12-12 10:09:26.942575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.424 [2024-12-12 10:09:26.942599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.425 [2024-12-12 10:09:26.942656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.425 [2024-12-12 10:09:26.942669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.425 [2024-12-12 10:09:26.942727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.425 [2024-12-12 10:09:26.942741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.425 #56 NEW cov: 12394 ft: 15612 corp: 35/883b lim: 40 exec/s: 56 rss: 75Mb L: 30/36 MS: 1 EraseBytes- 00:07:13.425 [2024-12-12 10:09:26.982382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.425 [2024-12-12 10:09:26.982406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.425 #57 NEW 
cov: 12394 ft: 15623 corp: 36/894b lim: 40 exec/s: 28 rss: 75Mb L: 11/36 MS: 1 EraseBytes- 00:07:13.425 #57 DONE cov: 12394 ft: 15623 corp: 36/894b lim: 40 exec/s: 28 rss: 75Mb 00:07:13.425 Done 57 runs in 2 second(s) 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.684 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.685 10:09:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:13.685 [2024-12-12 10:09:27.158223] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:13.685 [2024-12-12 10:09:27.158318] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475067 ] 00:07:13.944 [2024-12-12 10:09:27.430650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.944 [2024-12-12 10:09:27.488126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.944 [2024-12-12 10:09:27.547537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.944 [2024-12-12 10:09:27.563874] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:13.944 INFO: Running with entropic power schedule (0xFF, 100). 00:07:13.944 INFO: Seed: 1124593110 00:07:14.203 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:14.203 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:14.203 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:14.203 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.203 #2 INITED exec/s: 0 rss: 66Mb 00:07:14.203 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:14.203 This may also happen if the target rejected all inputs we tried so far 00:07:14.203 [2024-12-12 10:09:27.631119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.203 [2024-12-12 10:09:27.631157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.203 [2024-12-12 10:09:27.631249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.203 [2024-12-12 10:09:27.631266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.462 NEW_FUNC[1/716]: 0x44e158 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:14.462 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.462 #21 NEW cov: 12146 ft: 12147 corp: 2/23b lim: 40 exec/s: 0 rss: 73Mb L: 22/22 MS: 4 ChangeBit-CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:14.462 [2024-12-12 10:09:27.991047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.462 [2024-12-12 10:09:27.991084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.462 [2024-12-12 10:09:27.991215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.462 [2024-12-12 10:09:27.991236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.462 #25 NEW cov: 12269 ft: 12841 corp: 3/42b lim: 40 exec/s: 0 rss: 73Mb L: 19/22 MS: 4 
ShuffleBytes-InsertByte-InsertByte-InsertRepeatedBytes- 00:07:14.462 [2024-12-12 10:09:28.041275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff8cffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.462 [2024-12-12 10:09:28.041303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.462 [2024-12-12 10:09:28.041425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.462 [2024-12-12 10:09:28.041441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.462 #26 NEW cov: 12275 ft: 13037 corp: 4/62b lim: 40 exec/s: 0 rss: 73Mb L: 20/22 MS: 1 InsertByte- 00:07:14.721 [2024-12-12 10:09:28.101723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141495 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.721 [2024-12-12 10:09:28.101749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.721 [2024-12-12 10:09:28.101873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.721 [2024-12-12 10:09:28.101892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.721 [2024-12-12 10:09:28.102011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.721 [2024-12-12 10:09:28.102028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.721 #27 NEW cov: 12360 ft: 13474 corp: 5/90b lim: 40 exec/s: 0 rss: 73Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:14.721 [2024-12-12 10:09:28.171341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.721 [2024-12-12 10:09:28.171368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.721 #28 NEW cov: 12360 ft: 13951 corp: 6/104b lim: 40 exec/s: 0 rss: 74Mb L: 14/28 MS: 1 EraseBytes- 00:07:14.721 [2024-12-12 10:09:28.242484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.721 [2024-12-12 10:09:28.242511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.721 [2024-12-12 10:09:28.242638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39393939 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.721 [2024-12-12 10:09:28.242656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.722 [2024-12-12 10:09:28.242787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 
cdw10:39393939 cdw11:39393939 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.722 [2024-12-12 10:09:28.242805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.722 [2024-12-12 10:09:28.242940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:39393939 cdw11:39ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.722 [2024-12-12 10:09:28.242955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.722 [2024-12-12 10:09:28.243077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff0a2461 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.722 [2024-12-12 10:09:28.243096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.722 #29 NEW cov: 12360 ft: 14513 corp: 7/144b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:14.722 [2024-12-12 10:09:28.281689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.722 [2024-12-12 10:09:28.281719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.722 #35 NEW cov: 12360 ft: 14632 corp: 8/158b lim: 40 exec/s: 0 rss: 74Mb L: 14/40 MS: 1 ShuffleBytes- 00:07:14.722 [2024-12-12 10:09:28.352305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141495 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.722 [2024-12-12 10:09:28.352330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.722 [2024-12-12 10:09:28.352460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95143d14 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.722 [2024-12-12 10:09:28.352477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.722 [2024-12-12 10:09:28.352596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.722 [2024-12-12 10:09:28.352614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.981 #36 NEW cov: 12360 ft: 14659 corp: 9/187b lim: 40 exec/s: 0 rss: 74Mb L: 29/40 MS: 1 InsertByte- 00:07:14.981 [2024-12-12 10:09:28.422442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.422470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.981 [2024-12-12 10:09:28.422602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.422620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.981 #37 NEW cov: 12360 ft: 14701 corp: 10/209b lim: 40 exec/s: 0 rss: 74Mb L: 22/40 MS: 1 ChangeByte- 00:07:14.981 [2024-12-12 10:09:28.472656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.472682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.981 [2024-12-12 10:09:28.472810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.472827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.981 [2024-12-12 10:09:28.472945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:14ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.472963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.981 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:14.981 #38 NEW cov: 12383 ft: 14740 corp: 11/239b lim: 40 exec/s: 0 rss: 74Mb L: 30/40 MS: 1 InsertRepeatedBytes- 00:07:14.981 [2024-12-12 10:09:28.522670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff8c07ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.522696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.981 [2024-12-12 10:09:28.522831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.522848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.981 #39 NEW cov: 12383 ft: 14788 corp: 12/259b lim: 40 exec/s: 0 rss: 74Mb L: 20/40 MS: 1 ChangeBinInt- 00:07:14.981 [2024-12-12 10:09:28.573032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141495 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.573059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.981 [2024-12-12 10:09:28.573184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95143d14 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.573201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.981 [2024-12-12 10:09:28.573323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:14141414 cdw11:47141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.981 [2024-12-12 10:09:28.573339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.981 #40 NEW cov: 
12383 ft: 14808 corp: 13/289b lim: 40 exec/s: 40 rss: 74Mb L: 30/40 MS: 1 InsertByte- 00:07:15.240 [2024-12-12 10:09:28.643016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.240 [2024-12-12 10:09:28.643044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.240 [2024-12-12 10:09:28.643178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:141414ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.240 [2024-12-12 10:09:28.643195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.240 #41 NEW cov: 12383 ft: 14841 corp: 14/311b lim: 40 exec/s: 41 rss: 74Mb L: 22/40 MS: 1 CrossOver- 00:07:15.240 [2024-12-12 10:09:28.693117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff8c07ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.240 [2024-12-12 10:09:28.693143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.240 [2024-12-12 10:09:28.693273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.240 [2024-12-12 10:09:28.693291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.240 #42 NEW cov: 12383 ft: 14866 corp: 15/331b lim: 40 exec/s: 42 rss: 74Mb L: 20/40 MS: 1 ChangeByte- 00:07:15.240 [2024-12-12 10:09:28.753065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.240 [2024-12-12 10:09:28.753092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.240 #43 NEW cov: 12383 ft: 14881 corp: 16/345b lim: 40 exec/s: 43 rss: 74Mb L: 14/40 MS: 1 ChangeBinInt- 00:07:15.240 [2024-12-12 10:09:28.813658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141495 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.240 [2024-12-12 10:09:28.813685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.240 [2024-12-12 10:09:28.813818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:14141495 cdw11:143d1414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.240 [2024-12-12 10:09:28.813834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.240 [2024-12-12 10:09:28.813966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.240 [2024-12-12 10:09:28.813981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.240 #44 NEW cov: 12383 ft: 14943 corp: 17/374b lim: 40 exec/s: 44 rss: 74Mb L: 29/40 MS: 1 ShuffleBytes- 
00:07:15.240 [2024-12-12 10:09:28.864230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.241 [2024-12-12 10:09:28.864256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.241 [2024-12-12 10:09:28.864373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.241 [2024-12-12 10:09:28.864391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.241 [2024-12-12 10:09:28.864516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.241 [2024-12-12 10:09:28.864532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.241 [2024-12-12 10:09:28.864652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.241 [2024-12-12 10:09:28.864669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.241 [2024-12-12 10:09:28.864859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff0a2461 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.241 [2024-12-12 10:09:28.864877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.500 #45 NEW cov: 12383 ft: 14960 corp: 18/414b lim: 40 exec/s: 45 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:15.500 [2024-12-12 10:09:28.913699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:28.913729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.500 [2024-12-12 10:09:28.913849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:141414ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:28.913869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.500 #46 NEW cov: 12383 ft: 14979 corp: 19/436b lim: 40 exec/s: 46 rss: 74Mb L: 22/40 MS: 1 ShuffleBytes- 00:07:15.500 [2024-12-12 10:09:28.984154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:d5141414 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:28.984185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.500 [2024-12-12 10:09:28.984324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95951414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:28.984342] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.500 [2024-12-12 10:09:28.984468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:28.984485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.500 #47 NEW cov: 12383 ft: 15021 corp: 20/465b lim: 40 exec/s: 47 rss: 74Mb L: 29/40 MS: 1 InsertByte- 00:07:15.500 [2024-12-12 10:09:29.024336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:29.024363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.500 [2024-12-12 10:09:29.024492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:29.024511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.500 [2024-12-12 10:09:29.024629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:14ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:29.024649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.500 #48 NEW cov: 12383 ft: 15050 corp: 21/495b lim: 40 exec/s: 48 rss: 74Mb L: 30/40 MS: 1 ShuffleBytes- 00:07:15.500 [2024-12-12 10:09:29.084694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:29.084726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.500 [2024-12-12 10:09:29.084865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff141495 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:29.084882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.500 [2024-12-12 10:09:29.085008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95143d14 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:29.085034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.500 [2024-12-12 10:09:29.085153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:29.085171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.500 #49 NEW cov: 12383 ft: 15095 corp: 22/532b lim: 40 exec/s: 49 rss: 74Mb L: 37/40 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:15.500 [2024-12-12 10:09:29.124149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff060000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.500 [2024-12-12 10:09:29.124177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.759 #50 NEW cov: 12383 ft: 15109 corp: 23/546b lim: 40 exec/s: 50 rss: 74Mb L: 14/40 MS: 1 ChangeBinInt- 00:07:15.759 [2024-12-12 10:09:29.174674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.759 [2024-12-12 10:09:29.174700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.759 [2024-12-12 10:09:29.174831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.759 [2024-12-12 10:09:29.174848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.759 [2024-12-12 10:09:29.174967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1e000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.759 [2024-12-12 10:09:29.174985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.759 #51 NEW cov: 12383 ft: 15124 corp: 24/576b lim: 40 exec/s: 51 rss: 75Mb L: 30/40 MS: 1 ChangeBinInt- 00:07:15.759 [2024-12-12 10:09:29.244564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.759 [2024-12-12 10:09:29.244593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.759 #52 NEW cov: 12383 ft: 15150 corp: 25/590b lim: 40 exec/s: 52 rss: 75Mb L: 14/40 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:15.759 [2024-12-12 10:09:29.294990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14143614 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.759 [2024-12-12 10:09:29.295018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.760 [2024-12-12 10:09:29.295146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.760 [2024-12-12 10:09:29.295164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.760 #53 NEW cov: 12383 ft: 15171 corp: 26/613b lim: 40 exec/s: 53 rss: 75Mb L: 23/40 MS: 1 InsertByte- 00:07:15.760 [2024-12-12 10:09:29.345331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff060000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.760 [2024-12-12 10:09:29.345356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.760 [2024-12-12 10:09:29.345471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 
nsid:0 cdw10:00ffff0a cdw11:2461ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.760 [2024-12-12 10:09:29.345487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.760 [2024-12-12 10:09:29.345609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.760 [2024-12-12 10:09:29.345625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.760 #54 NEW cov: 12383 ft: 15244 corp: 27/644b lim: 40 exec/s: 54 rss: 75Mb L: 31/40 MS: 1 InsertRepeatedBytes- 00:07:16.019 [2024-12-12 10:09:29.405623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1414ffff cdw11:ffffff8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.405651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.405795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.405817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.405956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:24611414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.405972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.019 #55 NEW cov: 12383 ft: 15271 corp: 28/674b lim: 40 exec/s: 55 rss: 75Mb L: 30/40 MS: 1 CrossOver- 00:07:16.019 [2024-12-12 10:09:29.475709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14141495 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.475739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.475856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:95143d14 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.475874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.475991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:14141414 cdw11:47141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.476007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.019 #56 NEW cov: 12383 ft: 15278 corp: 29/704b lim: 40 exec/s: 56 rss: 75Mb L: 30/40 MS: 1 ShuffleBytes- 00:07:16.019 [2024-12-12 10:09:29.526122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:14ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.526148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.526274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff141495 cdw11:95959595 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.526290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.526418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:95ecc014 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.526435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.526564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:14141414 cdw11:14141414 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.526583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.019 #57 NEW cov: 12383 ft: 15304 corp: 30/741b lim: 40 exec/s: 57 rss: 75Mb L: 37/40 MS: 1 ChangeBinInt- 00:07:16.019 [2024-12-12 10:09:29.596499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.596526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.596650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:30303338 cdw11:37353832 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.596668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.596805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:30303139 cdw11:36383432 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.596822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.596941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:31323733 cdw11:34ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.596957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.019 [2024-12-12 10:09:29.597084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff0a2461 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.019 [2024-12-12 10:09:29.597101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.019 #58 NEW cov: 12383 ft: 15309 corp: 31/781b lim: 40 exec/s: 29 rss: 75Mb L: 40/40 MS: 1 ChangeASCIIInt- 00:07:16.019 #58 DONE cov: 12383 ft: 15309 corp: 31/781b lim: 40 exec/s: 29 rss: 75Mb 00:07:16.019 ###### Recommended dictionary. ###### 00:07:16.019 "\377\377\377\377\377\377\377\377" # Uses: 1 00:07:16.019 ###### End of recommended dictionary. 
###### 00:07:16.019 Done 58 runs in 2 second(s) 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.279 10:09:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:16.279 [2024-12-12 10:09:29.791977] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:16.279 [2024-12-12 10:09:29.792041] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475607 ] 00:07:16.538 [2024-12-12 10:09:30.066985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.538 [2024-12-12 10:09:30.110563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.538 [2024-12-12 10:09:30.169850] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.796 [2024-12-12 10:09:30.186185] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:16.797 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.797 INFO: Seed: 3747555947 00:07:16.797 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:16.797 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:16.797 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:16.797 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.797 #2 INITED exec/s: 0 rss: 65Mb 00:07:16.797 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:16.797 This may also happen if the target rejected all inputs we tried so far 00:07:16.797 [2024-12-12 10:09:30.251955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.797 [2024-12-12 10:09:30.251984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.797 [2024-12-12 10:09:30.252043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.797 [2024-12-12 10:09:30.252058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.797 [2024-12-12 10:09:30.252119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:16.797 [2024-12-12 10:09:30.252134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.056 NEW_FUNC[1/715]: 0x44fd28 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:17.056 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.056 #6 NEW cov: 12108 ft: 12133 corp: 2/27b lim: 35 exec/s: 0 rss: 73Mb L: 26/26 MS: 4 CopyPart-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:07:17.056 [2024-12-12 10:09:30.593154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 10:09:30.593210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 10:09:30.593297] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 10:09:30.593324] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 10:09:30.593408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 10:09:30.593433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 10:09:30.593517] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 10:09:30.593542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.056 NEW_FUNC[1/2]: 0x1c59dc8 in reactor_post_process_lw_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:922 00:07:17.056 NEW_FUNC[2/2]: 0x1c5e458 in _reactor_run /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:952 00:07:17.056 #7 NEW cov: 12262 ft: 13026 corp: 3/59b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:17.056 [2024-12-12 10:09:30.663023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 10:09:30.663052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 10:09:30.663126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 10:09:30.663141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 10:09:30.663200] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 10:09:30.663213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.056 [2024-12-12 10:09:30.663270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.056 [2024-12-12 10:09:30.663283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.056 #8 NEW cov: 12268 ft: 13338 corp: 4/92b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:17.315 [2024-12-12 10:09:30.702868] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.315 [2024-12-12 10:09:30.702894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.315 [2024-12-12 10:09:30.702968] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.315 [2024-12-12 10:09:30.702983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.315 #9 NEW cov: 12353 ft: 13821 corp: 5/112b lim: 35 exec/s: 0 rss: 73Mb L: 20/33 MS: 1 CrossOver- 00:07:17.315 [2024-12-12 10:09:30.743074] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.315 [2024-12-12 10:09:30.743100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.315 [2024-12-12 10:09:30.743160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.315 [2024-12-12 10:09:30.743174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.315 [2024-12-12 10:09:30.743230] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.315 [2024-12-12 10:09:30.743244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.315 #15 NEW cov: 12353 ft: 13890 corp: 6/138b lim: 35 exec/s: 0 rss: 73Mb L: 26/33 MS: 1 ShuffleBytes- 00:07:17.316 [2024-12-12 10:09:30.783310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.783335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.783409] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.783423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.783478] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.783492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.783553] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.783567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.316 #16 NEW cov: 12353 ft: 13929 corp: 7/170b lim: 35 exec/s: 0 rss: 73Mb L: 32/33 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:17.316 [2024-12-12 10:09:30.843469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.843496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.843557] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.843571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.843625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.843639] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.843692] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.843706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.316 #17 NEW cov: 12353 ft: 13993 corp: 8/202b lim: 35 exec/s: 0 rss: 73Mb L: 32/33 MS: 1 ShuffleBytes- 00:07:17.316 [2024-12-12 10:09:30.903492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.903517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.903577] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.903591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.903647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.903660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.316 #18 NEW cov: 12353 ft: 14095 corp: 9/228b lim: 35 exec/s: 0 rss: 73Mb L: 26/33 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:17.316 [2024-12-12 10:09:30.943664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.943689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.943769] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.943783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.316 [2024-12-12 10:09:30.943839] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.316 [2024-12-12 10:09:30.943852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.575 #19 NEW cov: 12353 ft: 14112 corp: 10/254b lim: 35 exec/s: 0 rss: 73Mb L: 26/33 MS: 1 CopyPart- 00:07:17.575 [2024-12-12 10:09:31.003835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.003864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.575 [2024-12-12 10:09:31.003923] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.003936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:17.575 [2024-12-12 10:09:31.003993] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.004006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.575 #20 NEW cov: 12353 ft: 14211 corp: 11/275b lim: 35 exec/s: 0 rss: 74Mb L: 21/33 MS: 1 CrossOver- 00:07:17.575 [2024-12-12 10:09:31.063797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.063822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.575 [2024-12-12 10:09:31.063880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.063894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.575 #21 NEW cov: 12353 ft: 14255 corp: 12/295b lim: 35 exec/s: 0 rss: 74Mb L: 20/33 MS: 1 ChangeByte- 00:07:17.575 [2024-12-12 10:09:31.104045] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.104071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.575 [2024-12-12 10:09:31.104128] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.104141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.575 [2024-12-12 10:09:31.104214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.104228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.575 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:17.575 #22 NEW cov: 12376 ft: 14316 corp: 13/321b lim: 35 exec/s: 0 rss: 74Mb L: 26/33 MS: 1 ShuffleBytes- 00:07:17.575 [2024-12-12 10:09:31.164211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.164236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.575 [2024-12-12 10:09:31.164294] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.164307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.575 [2024-12-12 10:09:31.164363] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.575 [2024-12-12 10:09:31.164376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.575 #23 NEW cov: 12376 ft: 14337 corp: 14/347b lim: 35 exec/s: 0 rss: 74Mb L: 26/33 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:17.835 [2024-12-12 10:09:31.224363] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.224391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.224451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.224465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.224524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.224536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.835 #24 NEW cov: 12376 ft: 14383 corp: 15/373b lim: 35 exec/s: 24 rss: 74Mb L: 26/33 MS: 1 ChangeBinInt- 00:07:17.835 [2024-12-12 10:09:31.264654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.264680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.264740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.264754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.264811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.264825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.264883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.264896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.835 #25 NEW cov: 12376 ft: 14395 corp: 16/405b lim: 35 exec/s: 25 rss: 74Mb L: 32/33 MS: 1 ChangeBinInt- 00:07:17.835 [2024-12-12 10:09:31.304789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.304815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.304875] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.304888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.304946] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.304959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.305016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.305029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.835 #26 NEW cov: 12376 ft: 14464 corp: 17/437b lim: 35 exec/s: 26 rss: 74Mb L: 32/33 MS: 1 ChangeBit- 00:07:17.835 [2024-12-12 10:09:31.344669] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.344695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.344770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.344792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.344847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.344863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.835 #27 NEW cov: 12383 ft: 14484 corp: 18/463b lim: 35 exec/s: 27 rss: 74Mb L: 26/33 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:17.835 [2024-12-12 10:09:31.384658] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.384683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.384742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.384755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.835 #28 NEW cov: 12383 ft: 14507 corp: 19/479b lim: 35 exec/s: 28 rss: 74Mb L: 16/33 MS: 1 EraseBytes- 00:07:17.835 [2024-12-12 10:09:31.444992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.445017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.445094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.445108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.835 [2024-12-12 10:09:31.445167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:17.835 [2024-12-12 10:09:31.445180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.835 #29 NEW cov: 12383 ft: 14514 corp: 20/502b lim: 35 exec/s: 29 rss: 74Mb L: 23/33 MS: 1 CrossOver- 00:07:18.094 [2024-12-12 10:09:31.485242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.094 [2024-12-12 10:09:31.485266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.094 [2024-12-12 10:09:31.485340] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.094 [2024-12-12 10:09:31.485354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.094 [2024-12-12 10:09:31.485413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.094 [2024-12-12 10:09:31.485427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.094 [2024-12-12 10:09:31.485486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.094 [2024-12-12 10:09:31.485499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.094 #30 NEW cov: 12386 ft: 14704 corp: 21/534b lim: 35 exec/s: 30 rss: 74Mb L: 32/33 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:18.094 [2024-12-12 10:09:31.545241] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.094 [2024-12-12 10:09:31.545266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.545345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.545359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.545425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.545438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.095 #31 NEW cov: 12386 ft: 14714 corp: 22/560b lim: 35 exec/s: 31 rss: 74Mb L: 26/33 MS: 1 ChangeBit- 00:07:18.095 [2024-12-12 10:09:31.605571] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.605595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.605655] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 
[2024-12-12 10:09:31.605669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.605729] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.605742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.605801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.605814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.095 #32 NEW cov: 12386 ft: 14725 corp: 23/593b lim: 35 exec/s: 32 rss: 74Mb L: 33/33 MS: 1 ChangeBit- 00:07:18.095 [2024-12-12 10:09:31.665582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.665607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.665667] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.665681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.665760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.665775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.095 #33 NEW cov: 12386 ft: 14749 corp: 24/619b lim: 35 exec/s: 33 rss: 74Mb L: 26/33 MS: 1 CrossOver- 00:07:18.095 [2024-12-12 10:09:31.705850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.705875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.705935] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.705948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.706009] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.706021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.095 [2024-12-12 10:09:31.706084] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.095 [2024-12-12 10:09:31.706098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.354 #34 NEW cov: 12386 ft: 14770 corp: 25/652b lim: 35 exec/s: 34 
rss: 74Mb L: 33/33 MS: 1 ChangeBinInt- 00:07:18.354 [2024-12-12 10:09:31.765903] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.354 [2024-12-12 10:09:31.765928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.354 [2024-12-12 10:09:31.765988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.354 [2024-12-12 10:09:31.766002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.354 [2024-12-12 10:09:31.766063] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.354 [2024-12-12 10:09:31.766076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.354 #35 NEW cov: 12386 ft: 14806 corp: 26/678b lim: 35 exec/s: 35 rss: 74Mb L: 26/33 MS: 1 ChangeByte- 00:07:18.354 [2024-12-12 10:09:31.826225] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.354 [2024-12-12 10:09:31.826250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.354 [2024-12-12 10:09:31.826324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.354 [2024-12-12 10:09:31.826339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.354 [2024-12-12 10:09:31.826400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.826414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.355 [2024-12-12 10:09:31.826474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.826488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.355 #36 NEW cov: 12386 ft: 14883 corp: 27/706b lim: 35 exec/s: 36 rss: 75Mb L: 28/33 MS: 1 CopyPart- 00:07:18.355 [2024-12-12 10:09:31.886357] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.886383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.355 [2024-12-12 10:09:31.886444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.886458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.355 [2024-12-12 10:09:31.886517] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:18.355 [2024-12-12 10:09:31.886530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.355 [2024-12-12 10:09:31.886590] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.886603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.355 #37 NEW cov: 12386 ft: 14893 corp: 28/739b lim: 35 exec/s: 37 rss: 75Mb L: 33/33 MS: 1 ShuffleBytes- 00:07:18.355 [2024-12-12 10:09:31.926167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.926192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.355 [2024-12-12 10:09:31.926251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.926264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.355 #38 NEW cov: 12386 ft: 14915 corp: 29/755b lim: 35 exec/s: 38 rss: 75Mb L: 16/33 MS: 1 EraseBytes- 00:07:18.355 [2024-12-12 10:09:31.986646] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.986672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.355 [2024-12-12 10:09:31.986737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.986751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.355 [2024-12-12 10:09:31.986812] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.986825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.355 [2024-12-12 10:09:31.986881] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.355 [2024-12-12 10:09:31.986894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.614 #39 NEW cov: 12386 ft: 14970 corp: 30/787b lim: 35 exec/s: 39 rss: 75Mb L: 32/33 MS: 1 ChangeByte- 00:07:18.614 [2024-12-12 10:09:32.046665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.046691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.614 [2024-12-12 10:09:32.046754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.046768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.614 [2024-12-12 10:09:32.046823] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.046836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.614 #40 NEW cov: 12386 ft: 14980 corp: 31/813b lim: 35 exec/s: 40 rss: 75Mb L: 26/33 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:18.614 [2024-12-12 10:09:32.106727] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.106753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.614 [2024-12-12 10:09:32.106809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.106823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.614 #41 NEW cov: 12386 ft: 14992 corp: 32/830b lim: 35 exec/s: 41 rss: 75Mb L: 17/33 MS: 1 InsertByte- 00:07:18.614 [2024-12-12 10:09:32.167013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.167038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.614 [2024-12-12 10:09:32.167095] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.167108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.614 [2024-12-12 10:09:32.167164] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.167178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.614 #42 NEW cov: 12386 ft: 14997 corp: 33/854b lim: 35 exec/s: 42 rss: 75Mb L: 24/33 MS: 1 EraseBytes- 00:07:18.614 [2024-12-12 10:09:32.207122] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.207147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.614 [2024-12-12 10:09:32.207220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.207234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.614 [2024-12-12 10:09:32.207291] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:18.614 [2024-12-12 10:09:32.207304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.614 #43 NEW cov: 12386 ft: 
15018 corp: 34/878b lim: 35 exec/s: 21 rss: 75Mb L: 24/33 MS: 1 ShuffleBytes- 00:07:18.614 #43 DONE cov: 12386 ft: 15018 corp: 34/878b lim: 35 exec/s: 21 rss: 75Mb 00:07:18.614 ###### Recommended dictionary. ###### 00:07:18.614 "\377\377\377\377" # Uses: 5 00:07:18.614 ###### End of recommended dictionary. ###### 00:07:18.614 Done 43 runs in 2 second(s) 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.873 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:18.874 10:09:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:18.874 [2024-12-12 10:09:32.400242] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:18.874 [2024-12-12 10:09:32.400314] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476041 ] 00:07:19.133 [2024-12-12 10:09:32.679020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.133 [2024-12-12 10:09:32.738463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.391 [2024-12-12 10:09:32.797625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.391 [2024-12-12 10:09:32.813957] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:19.391 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.391 INFO: Seed: 2078589123 00:07:19.391 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:19.391 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:19.391 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:19.391 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.391 #2 INITED exec/s: 0 rss: 65Mb 00:07:19.391 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.391 This may also happen if the target rejected all inputs we tried so far 00:07:19.391 [2024-12-12 10:09:32.873831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.391 [2024-12-12 10:09:32.873860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.391 [2024-12-12 10:09:32.873918] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.391 [2024-12-12 10:09:32.873932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.391 [2024-12-12 10:09:32.873988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.391 [2024-12-12 10:09:32.874002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.391 [2024-12-12 10:09:32.874058] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.391 [2024-12-12 10:09:32.874071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:19.651 NEW_FUNC[1/717]: 0x451268 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:19.651 NEW_FUNC[2/717]: 0x471278 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:19.651 #23 NEW cov: 12152 ft: 12153 corp: 2/36b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:19.651 [2024-12-12 10:09:33.214782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.651 [2024-12-12 10:09:33.214821] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.651 [2024-12-12 10:09:33.214893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.651 [2024-12-12 10:09:33.214911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.651 [2024-12-12 10:09:33.214979] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.651 [2024-12-12 10:09:33.214996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.651 [2024-12-12 10:09:33.215063] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.651 [2024-12-12 10:09:33.215080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:19.651 #24 NEW cov: 12265 ft: 12788 corp: 3/71b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:07:19.651 [2024-12-12 10:09:33.274775] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.651 [2024-12-12 10:09:33.274801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.651 [2024-12-12 10:09:33.274857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.651 [2024-12-12 10:09:33.274870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.651 [2024-12-12 10:09:33.274929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.651 [2024-12-12 10:09:33.274943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.651 [2024-12-12 10:09:33.274999] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.651 [2024-12-12 10:09:33.275012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:19.911 #25 NEW cov: 12271 ft: 12963 corp: 4/106b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:19.911 [2024-12-12 10:09:33.334773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.334798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.911 [2024-12-12 10:09:33.334856] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.334870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.911 [2024-12-12 10:09:33.334927] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.334940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.911 #26 NEW cov: 12356 ft: 13352 corp: 5/138b lim: 35 exec/s: 0 rss: 73Mb L: 32/35 MS: 1 EraseBytes- 00:07:19.911 [2024-12-12 10:09:33.394637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.394663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:19.911 [2024-12-12 10:09:33.394724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.394738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.911 #29 NEW cov: 12356 ft: 13956 corp: 6/158b lim: 35 exec/s: 0 rss: 73Mb L: 20/35 MS: 3 CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:19.911 [2024-12-12 10:09:33.435069] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.435093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.911 [2024-12-12 10:09:33.435151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.435164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.911 [2024-12-12 10:09:33.435222] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.435236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.911 #30 NEW cov: 12356 ft: 14025 corp: 7/190b lim: 35 exec/s: 0 rss: 73Mb L: 32/35 MS: 1 ChangeByte- 00:07:19.911 [2024-12-12 10:09:33.495400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.495425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.911 [2024-12-12 10:09:33.495501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.495515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:19.911 [2024-12-12 10:09:33.495600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.495614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:19.911 [2024-12-12 10:09:33.495671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000013 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.495685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:19.911 #31 NEW cov: 12356 ft: 14173 corp: 8/225b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:07:19.911 [2024-12-12 10:09:33.535088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:19.911 [2024-12-12 10:09:33.535112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.170 #32 NEW cov: 12356 ft: 14312 corp: 9/244b lim: 35 exec/s: 0 rss: 73Mb L: 19/35 MS: 1 EraseBytes- 00:07:20.170 [2024-12-12 10:09:33.595451] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.170 [2024-12-12 10:09:33.595476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.595536] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.595549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.595608] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.595621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.595680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.595696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.171 #35 NEW cov: 12356 ft: 14359 corp: 10/277b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:07:20.171 [2024-12-12 10:09:33.635566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.635592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.635650] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.635664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.635721] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.635735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.635794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.635806] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.171 #36 NEW cov: 12356 ft: 14402 corp: 11/310b lim: 35 exec/s: 0 rss: 73Mb L: 33/35 MS: 1 ChangeBit- 00:07:20.171 [2024-12-12 10:09:33.695952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.695978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.696037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.696051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.696110] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.696124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.696181] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.696195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:20.171 #37 NEW cov: 12356 ft: 14420 corp: 12/345b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:07:20.171 [2024-12-12 10:09:33.735839] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.735864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.735924] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.735937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.735996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.736009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.736074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.736087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.171 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:20.171 #38 NEW cov: 12379 ft: 14542 corp: 13/375b lim: 35 exec/s: 0 rss: 74Mb L: 30/35 MS: 1 InsertRepeatedBytes- 00:07:20.171 [2024-12-12 10:09:33.796010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 
[2024-12-12 10:09:33.796035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.796098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.796111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.796184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.796198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.171 [2024-12-12 10:09:33.796258] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.171 [2024-12-12 10:09:33.796272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.430 #39 NEW cov: 12379 ft: 14597 corp: 14/405b lim: 35 exec/s: 0 rss: 74Mb L: 30/35 MS: 1 ChangeBit- 00:07:20.430 [2024-12-12 10:09:33.856182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.430 [2024-12-12 10:09:33.856207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.430 [2024-12-12 10:09:33.856268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.430 [2024-12-12 10:09:33.856282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.430 [2024-12-12 10:09:33.856339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.430 [2024-12-12 10:09:33.856352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.430 [2024-12-12 10:09:33.856413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.430 [2024-12-12 10:09:33.856426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.430 #40 NEW cov: 12379 ft: 14612 corp: 15/434b lim: 35 exec/s: 40 rss: 74Mb L: 29/35 MS: 1 EraseBytes- 00:07:20.430 [2024-12-12 10:09:33.896278] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.430 [2024-12-12 10:09:33.896303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.430 [2024-12-12 10:09:33.896365] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:33.896378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 10:09:33.896438] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:33.896457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 10:09:33.896519] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:33.896532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.431 #41 NEW cov: 12379 ft: 14655 corp: 16/468b lim: 35 exec/s: 41 rss: 74Mb L: 34/35 MS: 1 InsertByte- 00:07:20.431 [2024-12-12 10:09:33.956647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:33.956672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 10:09:33.956731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:33.956745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 10:09:33.956804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:33.956817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 10:09:33.956876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:33.956889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:20.431 #42 NEW cov: 12379 ft: 14691 corp: 17/503b lim: 35 exec/s: 42 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:20.431 [2024-12-12 10:09:34.016635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:34.016660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 10:09:34.016724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:34.016738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 10:09:34.016800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:34.016813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.431 [2024-12-12 10:09:34.016874] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:34.016888] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.431 #43 NEW cov: 12379 ft: 14696 corp: 18/536b lim: 35 exec/s: 43 rss: 74Mb L: 33/35 MS: 1 ChangeByte- 00:07:20.431 [2024-12-12 10:09:34.056495] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.431 [2024-12-12 10:09:34.056521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.690 #44 NEW cov: 12379 ft: 14709 corp: 19/552b lim: 35 exec/s: 44 rss: 74Mb L: 16/35 MS: 1 EraseBytes- 00:07:20.690 [2024-12-12 10:09:34.096862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.096887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.690 [2024-12-12 10:09:34.096965] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.096979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.690 [2024-12-12 10:09:34.097039] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.097052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.690 [2024-12-12 10:09:34.097113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.097126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.690 #45 NEW cov: 12379 ft: 14732 corp: 20/585b lim: 35 exec/s: 45 rss: 74Mb L: 33/35 MS: 1 ChangeByte- 00:07:20.690 [2024-12-12 10:09:34.157029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.157055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.690 [2024-12-12 10:09:34.157118] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.157132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.690 [2024-12-12 10:09:34.157192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.157205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.690 [2024-12-12 10:09:34.157264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.157277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:07:20.690 #46 NEW cov: 12379 ft: 14733 corp: 21/618b lim: 35 exec/s: 46 rss: 74Mb L: 33/35 MS: 1 CopyPart- 00:07:20.690 [2024-12-12 10:09:34.197138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.197162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.690 [2024-12-12 10:09:34.197238] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.197252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.690 [2024-12-12 10:09:34.197311] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.197324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.690 [2024-12-12 10:09:34.197384] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.690 [2024-12-12 10:09:34.197398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.691 #47 NEW cov: 12379 ft: 14776 corp: 22/651b lim: 35 exec/s: 47 rss: 74Mb L: 33/35 MS: 1 ChangeBit- 00:07:20.691 [2024-12-12 10:09:34.237302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.237326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.691 [2024-12-12 10:09:34.237389] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.237402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.691 [2024-12-12 10:09:34.237458] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.237471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.691 #48 NEW cov: 12379 ft: 14805 corp: 23/685b lim: 35 exec/s: 48 rss: 74Mb L: 34/35 MS: 1 CrossOver- 00:07:20.691 [2024-12-12 10:09:34.277432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.277456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.691 [2024-12-12 10:09:34.277515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.277529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.691 [2024-12-12 10:09:34.277589] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.277603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.691 #49 NEW cov: 12379 ft: 14813 corp: 24/717b lim: 35 exec/s: 49 rss: 74Mb L: 32/35 MS: 1 ChangeBinInt- 00:07:20.691 [2024-12-12 10:09:34.317676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.317700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.691 [2024-12-12 10:09:34.317760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.317774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.691 [2024-12-12 10:09:34.317846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.317860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.691 [2024-12-12 10:09:34.317919] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.691 [2024-12-12 10:09:34.317932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:20.950 #50 NEW cov: 12379 ft: 14850 corp: 25/752b lim: 35 exec/s: 50 rss: 74Mb L: 35/35 MS: 1 ChangeBit- 00:07:20.950 [2024-12-12 10:09:34.357783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.357808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.950 [2024-12-12 10:09:34.357862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.357876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.950 [2024-12-12 10:09:34.357933] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.357949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.950 [2024-12-12 10:09:34.358005] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.358018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:20.950 #51 NEW cov: 12379 ft: 14899 corp: 26/787b lim: 35 exec/s: 51 rss: 74Mb L: 35/35 MS: 1 ChangeByte- 00:07:20.950 [2024-12-12 10:09:34.397455] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.397481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.950 [2024-12-12 10:09:34.397538] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.397552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.950 #52 NEW cov: 12379 ft: 14916 corp: 27/807b lim: 35 exec/s: 52 rss: 74Mb L: 20/35 MS: 1 ChangeBinInt- 00:07:20.950 [2024-12-12 10:09:34.437596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.437620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.950 [2024-12-12 10:09:34.437679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.437693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.950 #53 NEW cov: 12379 ft: 14932 corp: 28/824b lim: 35 exec/s: 53 rss: 74Mb L: 17/35 MS: 1 EraseBytes- 00:07:20.950 [2024-12-12 10:09:34.497743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.497767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.950 [2024-12-12 10:09:34.497827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.497841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.950 #54 NEW cov: 12379 ft: 14986 corp: 29/844b lim: 35 exec/s: 54 rss: 74Mb L: 20/35 MS: 1 EraseBytes- 00:07:20.950 [2024-12-12 10:09:34.537742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.537766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.950 #59 NEW cov: 12379 ft: 15211 corp: 30/851b lim: 35 exec/s: 59 rss: 74Mb L: 7/35 MS: 5 ChangeBit-InsertByte-EraseBytes-InsertByte-InsertRepeatedBytes- 00:07:20.950 [2024-12-12 10:09:34.577983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.578007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.950 [2024-12-12 10:09:34.578067] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:20.950 [2024-12-12 10:09:34.578080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.210 #60 NEW cov: 12379 ft: 15224 corp: 31/871b lim: 35 exec/s: 60 rss: 
74Mb L: 20/35 MS: 1 ChangeBinInt- 00:07:21.210 [2024-12-12 10:09:34.638318] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.638345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.638403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.638416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.638475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.638487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.210 #61 NEW cov: 12379 ft: 15357 corp: 32/892b lim: 35 exec/s: 61 rss: 74Mb L: 21/35 MS: 1 InsertByte- 00:07:21.210 [2024-12-12 10:09:34.678634] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.678658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.678719] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.678732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.678787] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.678800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.678857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.678870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.678928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.678941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:21.210 #62 NEW cov: 12379 ft: 15431 corp: 33/927b lim: 35 exec/s: 62 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:21.210 [2024-12-12 10:09:34.718631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000065 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.718655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.718735] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.718749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.718807] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.718820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.718879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.718894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.210 #63 NEW cov: 12379 ft: 15486 corp: 34/960b lim: 35 exec/s: 63 rss: 74Mb L: 33/35 MS: 1 CMP- DE: "\001\000\000\177"- 00:07:21.210 [2024-12-12 10:09:34.758725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.758749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.758810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.758824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.758881] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.758894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.758952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000077f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.758965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.210 #64 NEW cov: 12379 ft: 15495 corp: 35/991b lim: 35 exec/s: 64 rss: 74Mb L: 31/35 MS: 1 InsertByte- 00:07:21.210 [2024-12-12 10:09:34.819108] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.819133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.819194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.819208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.210 [2024-12-12 10:09:34.819265] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.819278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.210 
[2024-12-12 10:09:34.819339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.210 [2024-12-12 10:09:34.819352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:21.469 #65 NEW cov: 12379 ft: 15506 corp: 36/1026b lim: 35 exec/s: 32 rss: 75Mb L: 35/35 MS: 1 InsertByte- 00:07:21.470 #65 DONE cov: 12379 ft: 15506 corp: 36/1026b lim: 35 exec/s: 32 rss: 75Mb 00:07:21.470 ###### Recommended dictionary. ###### 00:07:21.470 "\001\000\000\177" # Uses: 0 00:07:21.470 ###### End of recommended dictionary. ###### 00:07:21.470 Done 65 runs in 2 second(s) 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.470 10:09:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:21.470 [2024-12-12 10:09:35.011153] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:21.470 [2024-12-12 10:09:35.011227] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476428 ] 00:07:21.728 [2024-12-12 10:09:35.292871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.728 [2024-12-12 10:09:35.338723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.986 [2024-12-12 10:09:35.398051] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.986 [2024-12-12 10:09:35.414388] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:21.986 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.986 INFO: Seed: 385620222 00:07:21.986 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:21.986 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:21.986 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:21.986 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.986 #2 INITED exec/s: 0 rss: 65Mb 00:07:21.986 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:21.986 This may also happen if the target rejected all inputs we tried so far 00:07:21.986 [2024-12-12 10:09:35.481933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.986 [2024-12-12 10:09:35.481975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:21.986 [2024-12-12 10:09:35.482075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.986 [2024-12-12 10:09:35.482096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:21.986 [2024-12-12 10:09:35.482177] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.986 [2024-12-12 10:09:35.482196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.245 NEW_FUNC[1/717]: 0x452728 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:22.245 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:22.245 #12 NEW cov: 12242 ft: 12243 corp: 2/66b lim: 105 exec/s: 0 rss: 72Mb L: 65/65 MS: 5 CrossOver-EraseBytes-CrossOver-CopyPart-InsertRepeatedBytes- 00:07:22.245 [2024-12-12 10:09:35.821910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.245 [2024-12-12 10:09:35.821962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.245 [2024-12-12 10:09:35.822100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.245 [2024-12-12 10:09:35.822127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.245 [2024-12-12 10:09:35.822276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.245 [2024-12-12 10:09:35.822305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.245 #13 NEW cov: 12355 ft: 12996 corp: 3/139b lim: 105 exec/s: 0 rss: 73Mb L: 73/73 MS: 1 InsertRepeatedBytes- 00:07:22.245 [2024-12-12 10:09:35.871967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012702130442992121 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.245 [2024-12-12 10:09:35.872001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.245 [2024-12-12 10:09:35.872107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.245 [2024-12-12 10:09:35.872130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.245 [2024-12-12 10:09:35.872261] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.245 [2024-12-12 10:09:35.872284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.504 #14 NEW cov: 12361 ft: 13174 corp: 4/212b lim: 105 exec/s: 0 rss: 73Mb L: 73/73 MS: 1 ChangeByte- 00:07:22.504 [2024-12-12 10:09:35.942204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:35.942236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.504 [2024-12-12 10:09:35.942366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:35.942393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.504 [2024-12-12 10:09:35.942526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:35.942548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.504 #15 NEW cov: 12446 ft: 13411 corp: 5/285b lim: 105 exec/s: 0 rss: 73Mb L: 73/73 MS: 1 CopyPart- 00:07:22.504 [2024-12-12 10:09:35.992511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:35.992540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.504 [2024-12-12 10:09:35.992646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:35.992668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.504 [2024-12-12 10:09:35.992814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:35.992839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.504 [2024-12-12 10:09:35.992966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:35.992990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:22.504 #16 NEW cov: 12446 ft: 14046 corp: 6/380b lim: 105 exec/s: 0 rss: 73Mb L: 95/95 MS: 1 CrossOver- 00:07:22.504 [2024-12-12 10:09:36.062455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:36.062487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.504 [2024-12-12 10:09:36.062587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:36.062616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.504 [2024-12-12 10:09:36.062763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:36.062790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.504 #17 NEW cov: 12446 ft: 14143 corp: 7/453b lim: 105 exec/s: 0 rss: 73Mb L: 73/95 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\013"- 00:07:22.504 [2024-12-12 10:09:36.112643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012702130442992121 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:36.112675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.504 [2024-12-12 10:09:36.112782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.504 [2024-12-12 10:09:36.112806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.504 [2024-12-12 10:09:36.112942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.505 [2024-12-12 10:09:36.112967] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.764 #18 NEW cov: 12446 ft: 14215 corp: 8/526b lim: 105 exec/s: 0 rss: 73Mb L: 73/95 MS: 1 ShuffleBytes- 00:07:22.764 [2024-12-12 10:09:36.182932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.182965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.764 [2024-12-12 10:09:36.183098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.183119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.764 [2024-12-12 10:09:36.183260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.183282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.764 #19 NEW cov: 12446 ft: 14235 corp: 9/591b lim: 105 exec/s: 0 rss: 73Mb L: 65/95 MS: 1 CopyPart- 00:07:22.764 [2024-12-12 10:09:36.253124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.253155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.764 [2024-12-12 10:09:36.253265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.253286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.764 [2024-12-12 10:09:36.253421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.253447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.764 #20 NEW cov: 12446 ft: 14258 corp: 10/664b lim: 105 exec/s: 0 rss: 73Mb L: 73/95 MS: 1 ChangeByte- 00:07:22.764 [2024-12-12 10:09:36.323254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.323289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.764 [2024-12-12 10:09:36.323399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.323427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.764 [2024-12-12 10:09:36.323557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.323581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:22.764 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:22.764 #21 NEW cov: 12469 ft: 14316 corp: 11/730b lim: 105 exec/s: 0 rss: 73Mb L: 66/95 MS: 1 EraseBytes- 00:07:22.764 [2024-12-12 10:09:36.373450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.373484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:22.764 [2024-12-12 10:09:36.373581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.373600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:22.764 [2024-12-12 10:09:36.373753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:22.764 [2024-12-12 10:09:36.373779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.024 #27 NEW cov: 12469 ft: 14349 corp: 12/803b lim: 105 exec/s: 0 rss: 73Mb L: 73/95 MS: 1 ChangeBit- 00:07:23.024 [2024-12-12 10:09:36.443657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.443691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.024 [2024-12-12 10:09:36.443821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.443849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.024 [2024-12-12 10:09:36.443989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.444016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.024 #28 NEW cov: 12469 ft: 14371 corp: 13/876b lim: 105 exec/s: 28 rss: 74Mb L: 73/95 MS: 1 ShuffleBytes- 00:07:23.024 [2024-12-12 10:09:36.493846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012702130442992121 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.493879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.024 [2024-12-12 10:09:36.493997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:23.024 [2024-12-12 10:09:36.494024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.024 [2024-12-12 10:09:36.494159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036668901881 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.494185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.024 #29 NEW cov: 12469 ft: 14407 corp: 14/949b lim: 105 exec/s: 29 rss: 74Mb L: 73/95 MS: 1 ChangeByte- 00:07:23.024 [2024-12-12 10:09:36.564039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.564071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.024 [2024-12-12 10:09:36.564166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872316419617808223 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.564191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.024 [2024-12-12 10:09:36.564326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.564351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.024 #30 NEW cov: 12469 ft: 14451 corp: 15/1014b lim: 105 exec/s: 30 rss: 74Mb L: 65/95 MS: 1 ChangeBinInt- 00:07:23.024 [2024-12-12 10:09:36.614163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012702130442992121 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.614198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.024 [2024-12-12 10:09:36.614329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.614355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.024 [2024-12-12 10:09:36.614502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.024 [2024-12-12 10:09:36.614527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.024 #31 NEW cov: 12469 ft: 14459 corp: 16/1087b lim: 105 exec/s: 31 rss: 74Mb L: 73/95 MS: 1 ChangeBinInt- 00:07:23.283 [2024-12-12 10:09:36.664378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.283 [2024-12-12 10:09:36.664414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.283 [2024-12-12 
10:09:36.664531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.283 [2024-12-12 10:09:36.664554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.283 [2024-12-12 10:09:36.664684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.283 [2024-12-12 10:09:36.664718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.283 #32 NEW cov: 12469 ft: 14489 corp: 17/1154b lim: 105 exec/s: 32 rss: 74Mb L: 67/95 MS: 1 InsertByte- 00:07:23.283 [2024-12-12 10:09:36.734633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.283 [2024-12-12 10:09:36.734666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.283 [2024-12-12 10:09:36.734782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.283 [2024-12-12 10:09:36.734804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.283 [2024-12-12 10:09:36.734939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.734967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.284 #33 NEW cov: 12469 ft: 14522 corp: 18/1219b lim: 105 exec/s: 33 rss: 74Mb L: 65/95 MS: 1 ChangeBit- 00:07:23.284 [2024-12-12 10:09:36.784722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.784756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.284 [2024-12-12 10:09:36.784865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703032487182347 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.784890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.284 [2024-12-12 10:09:36.785030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11167231603077937657 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.785057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.284 #34 NEW cov: 12469 ft: 14536 corp: 19/1294b lim: 105 exec/s: 34 rss: 74Mb L: 75/95 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\013"- 00:07:23.284 [2024-12-12 10:09:36.854929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:23.284 [2024-12-12 10:09:36.854962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.284 [2024-12-12 10:09:36.855085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:862995651460333568 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.855114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.284 [2024-12-12 10:09:36.855244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.855268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.284 #35 NEW cov: 12469 ft: 14554 corp: 20/1375b lim: 105 exec/s: 35 rss: 74Mb L: 81/95 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\013"- 00:07:23.284 [2024-12-12 10:09:36.905284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.905315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.284 [2024-12-12 10:09:36.905416] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872345118589280095 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.905440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.284 [2024-12-12 10:09:36.905568] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.905592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.284 [2024-12-12 10:09:36.905735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.284 [2024-12-12 10:09:36.905760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.543 #36 NEW cov: 12469 ft: 14614 corp: 21/1474b lim: 105 exec/s: 36 rss: 74Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:07:23.543 [2024-12-12 10:09:36.975561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:36.975596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.543 [2024-12-12 10:09:36.975721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012702903537105401 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:36.975743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.543 [2024-12-12 10:09:36.975879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:36.975905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.543 [2024-12-12 10:09:36.976044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:36.976073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.543 #37 NEW cov: 12469 ft: 14665 corp: 22/1569b lim: 105 exec/s: 37 rss: 74Mb L: 95/99 MS: 1 ChangeByte- 00:07:23.543 [2024-12-12 10:09:37.045437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:37.045463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.543 [2024-12-12 10:09:37.045611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:37.045636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.543 #38 NEW cov: 12469 ft: 15067 corp: 23/1630b lim: 105 exec/s: 38 rss: 74Mb L: 61/99 MS: 1 EraseBytes- 00:07:23.543 [2024-12-12 10:09:37.095951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:37.095984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.543 [2024-12-12 10:09:37.096083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6872345118589280095 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:37.096107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.543 [2024-12-12 10:09:37.096237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:8753160913407277433 len:31098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:37.096261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.543 [2024-12-12 10:09:37.096391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:37.096416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.543 #39 NEW cov: 12469 ft: 15148 corp: 24/1729b lim: 105 exec/s: 39 rss: 74Mb L: 99/99 MS: 1 ChangeBit- 00:07:23.543 [2024-12-12 10:09:37.165592] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.543 [2024-12-12 10:09:37.165619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.803 #40 NEW cov: 12469 ft: 15574 corp: 25/1765b lim: 105 exec/s: 40 rss: 74Mb L: 36/99 MS: 1 EraseBytes- 00:07:23.803 [2024-12-12 10:09:37.216082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.216115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.803 [2024-12-12 10:09:37.216241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.216266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.803 [2024-12-12 10:09:37.216405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.216426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.803 #41 NEW cov: 12469 ft: 15645 corp: 26/1839b lim: 105 exec/s: 41 rss: 74Mb L: 74/99 MS: 1 InsertByte- 00:07:23.803 [2024-12-12 10:09:37.266246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18012703036681091577 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.266279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.803 [2024-12-12 10:09:37.266411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18012703032487182347 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.266439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.803 [2024-12-12 10:09:37.266570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:11167231603077937657 len:63994 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.266591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.803 #42 NEW cov: 12469 ft: 15661 corp: 27/1914b lim: 105 exec/s: 42 rss: 74Mb L: 75/99 MS: 1 ChangeBinInt- 00:07:23.803 [2024-12-12 10:09:37.335998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:24416 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.336033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.803 #43 NEW cov: 12469 ft: 15721 corp: 28/1950b lim: 105 exec/s: 43 rss: 74Mb L: 36/99 MS: 1 ChangeBit- 00:07:23.803 [2024-12-12 10:09:37.406887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744072065384447 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.406915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:23.803 [2024-12-12 10:09:37.406999] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.407024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:23.803 [2024-12-12 10:09:37.407154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.407179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:23.803 [2024-12-12 10:09:37.407311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.803 [2024-12-12 10:09:37.407337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:23.803 #48 NEW cov: 12469 ft: 15734 corp: 29/2051b lim: 105 exec/s: 48 rss: 74Mb L: 101/101 MS: 5 InsertByte-ShuffleBytes-ShuffleBytes-EraseBytes-InsertRepeatedBytes- 00:07:24.062 [2024-12-12 10:09:37.456471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6872316419617283935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.062 [2024-12-12 10:09:37.456497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.062 #49 NEW cov: 12469 ft: 15759 corp: 30/2087b lim: 105 exec/s: 24 rss: 75Mb L: 36/101 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:24.062 #49 DONE cov: 12469 ft: 15759 corp: 30/2087b lim: 105 exec/s: 24 rss: 75Mb 00:07:24.062 ###### Recommended dictionary. ###### 00:07:24.062 "\000\000\000\000\000\000\000\013" # Uses: 3 00:07:24.062 "\377\377\377\377" # Uses: 0 00:07:24.062 ###### End of recommended dictionary. 
###### 00:07:24.062 Done 49 runs in 2 second(s) 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:24.062 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:24.063 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:24.063 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:24.063 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:24.063 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:24.063 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.063 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:24.063 10:09:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:24.063 [2024-12-12 10:09:37.630849] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:24.063 [2024-12-12 10:09:37.630910] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476967 ] 00:07:24.321 [2024-12-12 10:09:37.900138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.321 [2024-12-12 10:09:37.952520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.581 [2024-12-12 10:09:38.011514] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.581 [2024-12-12 10:09:38.027849] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:24.581 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.581 INFO: Seed: 2998613007 00:07:24.581 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:24.581 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:24.581 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:24.581 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.581 #2 INITED exec/s: 0 rss: 66Mb 00:07:24.581 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.581 This may also happen if the target rejected all inputs we tried so far 00:07:24.581 [2024-12-12 10:09:38.072614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.581 [2024-12-12 10:09:38.072647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:24.581 [2024-12-12 10:09:38.072700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.581 [2024-12-12 10:09:38.072727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:24.840 NEW_FUNC[1/718]: 0x455aa8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:24.840 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:24.840 #3 NEW cov: 12263 ft: 12259 corp: 2/60b lim: 120 exec/s: 0 rss: 74Mb L: 59/59 MS: 1 InsertRepeatedBytes- 00:07:24.840 [2024-12-12 10:09:38.443485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530244471566255 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.840 [2024-12-12 10:09:38.443523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.099 #8 NEW cov: 12376 ft: 13628 corp: 3/87b lim: 120 exec/s: 0 rss: 74Mb L: 27/59 MS: 5 CopyPart-EraseBytes-CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:25.099 [2024-12-12 10:09:38.503628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.099 [2024-12-12 10:09:38.503659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.099 [2024-12-12 10:09:38.503707] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.099 [2024-12-12 10:09:38.503733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.099 [2024-12-12 10:09:38.503763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.099 [2024-12-12 10:09:38.503780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.099 #14 NEW cov: 12382 ft: 14180 corp: 4/162b lim: 120 exec/s: 0 rss: 74Mb L: 75/75 MS: 1 CopyPart- 00:07:25.099 [2024-12-12 10:09:38.603792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530244471566255 len:20561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.099 [2024-12-12 10:09:38.603823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.099 #15 NEW cov: 12467 ft: 14587 corp: 5/189b lim: 120 exec/s: 0 rss: 74Mb L: 27/75 MS: 1 ChangeBinInt- 00:07:25.099 [2024-12-12 10:09:38.694007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530244471566255 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.099 [2024-12-12 10:09:38.694038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.099 #16 NEW cov: 12467 ft: 14700 corp: 6/216b lim: 120 exec/s: 0 rss: 74Mb L: 27/75 MS: 1 CopyPart- 00:07:25.358 [2024-12-12 10:09:38.754207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.754237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.358 [2024-12-12 10:09:38.754286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.754304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.358 #17 NEW cov: 12467 ft: 14791 corp: 7/276b lim: 120 exec/s: 0 rss: 74Mb L: 60/75 MS: 1 InsertByte- 00:07:25.358 [2024-12-12 10:09:38.814307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530244471566255 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.814336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.358 #18 NEW cov: 12467 ft: 14896 corp: 8/303b lim: 120 exec/s: 0 rss: 74Mb L: 27/75 MS: 1 ChangeBinInt- 00:07:25.358 [2024-12-12 10:09:38.874515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.874544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.358 [2024-12-12 10:09:38.874593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:25.358 [2024-12-12 10:09:38.874615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.358 #19 NEW cov: 12467 ft: 14951 corp: 9/362b lim: 120 exec/s: 0 rss: 74Mb L: 59/75 MS: 1 CopyPart- 00:07:25.358 [2024-12-12 10:09:38.924645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.924674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.358 [2024-12-12 10:09:38.924728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12659530243715902730 len:44872 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.924746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.358 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:25.358 #20 NEW cov: 12490 ft: 14990 corp: 10/425b lim: 120 exec/s: 0 rss: 74Mb L: 63/75 MS: 1 CrossOver- 00:07:25.358 [2024-12-12 10:09:38.974926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10561665232664695442 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.974956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.358 [2024-12-12 10:09:38.975003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.975020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.358 [2024-12-12 10:09:38.975051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.975067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.358 [2024-12-12 10:09:38.975096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10561665234359194258 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.358 [2024-12-12 10:09:38.975112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.617 #21 NEW cov: 12490 ft: 15478 corp: 11/532b lim: 120 exec/s: 0 rss: 74Mb L: 107/107 MS: 1 InsertRepeatedBytes- 00:07:25.617 [2024-12-12 10:09:39.064993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.617 [2024-12-12 10:09:39.065022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.617 #29 NEW cov: 12490 ft: 15510 corp: 12/557b lim: 120 exec/s: 29 rss: 74Mb L: 25/107 MS: 3 ChangeByte-InsertRepeatedBytes-CopyPart- 00:07:25.617 [2024-12-12 10:09:39.125283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10561665232664695442 len:37523 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:25.617 [2024-12-12 10:09:39.125312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.617 [2024-12-12 10:09:39.125359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.617 [2024-12-12 10:09:39.125376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.617 [2024-12-12 10:09:39.125406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.617 [2024-12-12 10:09:39.125423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.617 [2024-12-12 10:09:39.125455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10561665234359194258 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.617 [2024-12-12 10:09:39.125471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.617 #30 NEW cov: 12490 ft: 15532 corp: 13/664b lim: 120 exec/s: 30 rss: 74Mb L: 107/107 MS: 1 ChangeBit- 00:07:25.617 [2024-12-12 10:09:39.215555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10561665232664695442 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.617 [2024-12-12 10:09:39.215586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.618 [2024-12-12 10:09:39.215620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.618 [2024-12-12 10:09:39.215638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.618 [2024-12-12 10:09:39.215669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.618 [2024-12-12 10:09:39.215686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.877 #31 NEW cov: 12490 ft: 15570 corp: 14/759b lim: 120 exec/s: 31 rss: 75Mb L: 95/107 MS: 1 EraseBytes- 00:07:25.877 [2024-12-12 10:09:39.305793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10561665232664695442 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.305822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.877 [2024-12-12 10:09:39.305869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.305886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.877 [2024-12-12 10:09:39.305916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 
lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.305932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.877 [2024-12-12 10:09:39.305960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10561665234359194258 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.305976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:25.877 #32 NEW cov: 12490 ft: 15623 corp: 15/866b lim: 120 exec/s: 32 rss: 75Mb L: 107/107 MS: 1 ChangeBinInt- 00:07:25.877 [2024-12-12 10:09:39.365857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.365886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.877 [2024-12-12 10:09:39.365933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.365950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.877 [2024-12-12 10:09:39.365980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.365997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:25.877 #33 NEW cov: 12490 ft: 15625 corp: 16/944b lim: 120 exec/s: 33 rss: 75Mb L: 78/107 MS: 1 CopyPart- 00:07:25.877 [2024-12-12 10:09:39.415920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.415949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.877 [2024-12-12 10:09:39.415996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12659530243715902730 len:44872 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.416014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:25.877 #34 NEW cov: 12490 ft: 15646 corp: 17/1007b lim: 120 exec/s: 34 rss: 75Mb L: 63/107 MS: 1 ChangeByte- 00:07:25.877 [2024-12-12 10:09:39.506183] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.506212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:25.877 [2024-12-12 10:09:39.506259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12610078957984853935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.877 [2024-12-12 10:09:39.506277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.135 #35 NEW cov: 12490 ft: 15671 corp: 18/1057b lim: 120 exec/s: 35 rss: 75Mb 
L: 50/107 MS: 1 EraseBytes- 00:07:26.135 [2024-12-12 10:09:39.596428] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.136 [2024-12-12 10:09:39.596458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.136 [2024-12-12 10:09:39.596506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:12659530243715902730 len:44872 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.136 [2024-12-12 10:09:39.596523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.136 #36 NEW cov: 12490 ft: 15733 corp: 19/1120b lim: 120 exec/s: 36 rss: 75Mb L: 63/107 MS: 1 ChangeBit- 00:07:26.136 [2024-12-12 10:09:39.646636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10561665232664695442 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.136 [2024-12-12 10:09:39.646665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.136 [2024-12-12 10:09:39.646711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.136 [2024-12-12 10:09:39.646734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.136 [2024-12-12 10:09:39.646764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10561665234359194258 len:6547 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.136 [2024-12-12 10:09:39.646781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.136 [2024-12-12 10:09:39.646810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10561665234359194258 len:2736 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.136 [2024-12-12 10:09:39.646826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.136 #37 NEW cov: 12490 ft: 15750 corp: 20/1228b lim: 120 exec/s: 37 rss: 75Mb L: 108/108 MS: 1 InsertByte- 00:07:26.136 [2024-12-12 10:09:39.736702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12659530244471566255 len:20561 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.136 [2024-12-12 10:09:39.736743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.395 #38 NEW cov: 12490 ft: 15775 corp: 21/1255b lim: 120 exec/s: 38 rss: 75Mb L: 27/108 MS: 1 ShuffleBytes- 00:07:26.395 [2024-12-12 10:09:39.837148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10561102282711274130 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 [2024-12-12 10:09:39.837180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.395 [2024-12-12 10:09:39.837227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 
[2024-12-12 10:09:39.837245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.395 [2024-12-12 10:09:39.837275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10561665234359194258 len:37523 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 [2024-12-12 10:09:39.837291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.395 [2024-12-12 10:09:39.837320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10561665234359194258 len:44976 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 [2024-12-12 10:09:39.837336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:26.395 #39 NEW cov: 12490 ft: 15786 corp: 22/1362b lim: 120 exec/s: 39 rss: 75Mb L: 107/108 MS: 1 ChangeBit- 00:07:26.395 [2024-12-12 10:09:39.887193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 [2024-12-12 10:09:39.887223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.395 [2024-12-12 10:09:39.887273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:51712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 [2024-12-12 10:09:39.887301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.395 #40 NEW cov: 12490 ft: 15804 corp: 23/1422b lim: 120 exec/s: 40 rss: 75Mb L: 60/108 MS: 1 ChangeByte- 00:07:26.395 [2024-12-12 10:09:39.977392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 [2024-12-12 10:09:39.977421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.395 [2024-12-12 10:09:39.977468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 [2024-12-12 10:09:39.977486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.395 #46 NEW cov: 12490 ft: 15857 corp: 24/1481b lim: 120 exec/s: 46 rss: 75Mb L: 59/108 MS: 1 CrossOver- 00:07:26.395 [2024-12-12 10:09:40.027653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 [2024-12-12 10:09:40.027685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:26.395 [2024-12-12 10:09:40.027725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.395 [2024-12-12 10:09:40.027744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:26.395 [2024-12-12 10:09:40.027775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:58112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:26.395 [2024-12-12 10:09:40.027801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:26.654 #47 NEW cov: 12490 ft: 15914 corp: 25/1556b lim: 120 exec/s: 23 rss: 75Mb L: 75/108 MS: 1 ChangeByte- 00:07:26.654 #47 DONE cov: 12490 ft: 15914 corp: 25/1556b lim: 120 exec/s: 23 rss: 75Mb 00:07:26.654 Done 47 runs in 2 second(s) 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.654 10:09:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:26.654 [2024-12-12 10:09:40.262355] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:26.654 [2024-12-12 10:09:40.262428] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477489 ] 00:07:26.913 [2024-12-12 10:09:40.460541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.913 [2024-12-12 10:09:40.491380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.913 [2024-12-12 10:09:40.550418] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.172 [2024-12-12 10:09:40.566743] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:27.172 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.172 INFO: Seed: 1242658542 00:07:27.172 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:27.172 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:27.172 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:27.172 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.172 #2 INITED exec/s: 0 rss: 67Mb 00:07:27.172 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:27.172 This may also happen if the target rejected all inputs we tried so far 00:07:27.172 [2024-12-12 10:09:40.633432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.172 [2024-12-12 10:09:40.633471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.172 [2024-12-12 10:09:40.633597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.172 [2024-12-12 10:09:40.633621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.172 [2024-12-12 10:09:40.633752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.172 [2024-12-12 10:09:40.633778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.172 [2024-12-12 10:09:40.633908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:27.172 [2024-12-12 10:09:40.633931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:27.431 NEW_FUNC[1/713]: 0x459398 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:27.431 NEW_FUNC[2/713]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:27.431 #19 NEW cov: 12186 ft: 12207 corp: 2/83b lim: 100 exec/s: 0 rss: 73Mb L: 82/82 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:27.431 [2024-12-12 10:09:40.983570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.431 [2024-12-12 10:09:40.983612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.431 NEW_FUNC[1/3]: 0x1fb44c8 in 
spdk_thread_get_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:830 00:07:27.431 NEW_FUNC[2/3]: 0x1fb4668 in spdk_thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1209 00:07:27.431 #21 NEW cov: 12319 ft: 13152 corp: 3/117b lim: 100 exec/s: 0 rss: 73Mb L: 34/82 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:27.431 [2024-12-12 10:09:41.043687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.431 [2024-12-12 10:09:41.043722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.690 #22 NEW cov: 12325 ft: 13291 corp: 4/151b lim: 100 exec/s: 0 rss: 74Mb L: 34/82 MS: 1 ChangeBinInt- 00:07:27.690 [2024-12-12 10:09:41.113763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.690 [2024-12-12 10:09:41.113790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.690 #23 NEW cov: 12410 ft: 13604 corp: 5/172b lim: 100 exec/s: 0 rss: 74Mb L: 21/82 MS: 1 EraseBytes- 00:07:27.690 [2024-12-12 10:09:41.164180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.690 [2024-12-12 10:09:41.164211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.690 [2024-12-12 10:09:41.164333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.690 [2024-12-12 10:09:41.164355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.690 #25 NEW cov: 12410 ft: 14067 corp: 6/229b lim: 100 exec/s: 0 rss: 74Mb L: 57/82 MS: 2 CrossOver-CrossOver- 00:07:27.690 [2024-12-12 10:09:41.214101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.690 [2024-12-12 10:09:41.214126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.690 #26 NEW cov: 12410 ft: 14132 corp: 7/263b lim: 100 exec/s: 0 rss: 74Mb L: 34/82 MS: 1 ChangeByte- 00:07:27.690 [2024-12-12 10:09:41.284599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.690 [2024-12-12 10:09:41.284632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.690 [2024-12-12 10:09:41.284743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.690 [2024-12-12 10:09:41.284766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.690 [2024-12-12 10:09:41.284883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.690 [2024-12-12 10:09:41.284908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.949 #27 NEW cov: 12410 ft: 14441 corp: 8/331b lim: 100 exec/s: 0 rss: 74Mb L: 68/82 MS: 1 CrossOver- 00:07:27.949 [2024-12-12 10:09:41.354819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.949 [2024-12-12 10:09:41.354848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.949 [2024-12-12 10:09:41.354955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:27.949 [2024-12-12 10:09:41.354977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:27.949 [2024-12-12 10:09:41.355097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:27.949 [2024-12-12 10:09:41.355118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:27.949 #28 NEW cov: 12410 ft: 14479 corp: 9/399b lim: 100 exec/s: 0 rss: 74Mb L: 68/82 MS: 1 ChangeByte- 00:07:27.949 [2024-12-12 10:09:41.424755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.949 [2024-12-12 10:09:41.424784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.949 #29 NEW cov: 12410 ft: 14491 corp: 10/419b lim: 100 exec/s: 0 rss: 74Mb L: 20/82 MS: 1 EraseBytes- 00:07:27.949 [2024-12-12 10:09:41.474911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.949 [2024-12-12 10:09:41.474936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.949 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:27.949 #30 NEW cov: 12433 ft: 14537 corp: 11/453b lim: 100 exec/s: 0 rss: 74Mb L: 34/82 MS: 1 ChangeBinInt- 00:07:27.949 [2024-12-12 10:09:41.525002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:27.949 [2024-12-12 10:09:41.525027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:27.949 #36 NEW cov: 12433 ft: 14549 corp: 12/487b lim: 100 exec/s: 0 rss: 74Mb L: 34/82 MS: 1 ChangeByte- 00:07:28.209 [2024-12-12 10:09:41.595189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.209 [2024-12-12 10:09:41.595214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.209 #37 NEW cov: 12433 ft: 14573 corp: 13/521b lim: 100 exec/s: 37 rss: 74Mb L: 34/82 MS: 1 ShuffleBytes- 00:07:28.209 [2024-12-12 10:09:41.645390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.209 [2024-12-12 10:09:41.645418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.209 #38 NEW cov: 12433 ft: 14621 corp: 14/556b lim: 100 exec/s: 38 rss: 74Mb L: 35/82 MS: 1 InsertByte- 00:07:28.209 [2024-12-12 10:09:41.695481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.209 [2024-12-12 10:09:41.695505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.209 #39 NEW cov: 
12433 ft: 14663 corp: 15/590b lim: 100 exec/s: 39 rss: 74Mb L: 34/82 MS: 1 ChangeBinInt- 00:07:28.209 [2024-12-12 10:09:41.745879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.209 [2024-12-12 10:09:41.745909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.209 [2024-12-12 10:09:41.746035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.209 [2024-12-12 10:09:41.746055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.209 #40 NEW cov: 12433 ft: 14719 corp: 16/634b lim: 100 exec/s: 40 rss: 74Mb L: 44/82 MS: 1 EraseBytes- 00:07:28.209 [2024-12-12 10:09:41.795815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.209 [2024-12-12 10:09:41.795840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.209 #41 NEW cov: 12433 ft: 14731 corp: 17/668b lim: 100 exec/s: 41 rss: 74Mb L: 34/82 MS: 1 ChangeBit- 00:07:28.209 [2024-12-12 10:09:41.846020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.209 [2024-12-12 10:09:41.846044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.468 #42 NEW cov: 12433 ft: 14764 corp: 18/696b lim: 100 exec/s: 42 rss: 74Mb L: 28/82 MS: 1 EraseBytes- 00:07:28.468 [2024-12-12 10:09:41.896119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.468 [2024-12-12 10:09:41.896143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.468 #43 NEW cov: 12433 ft: 14770 corp: 19/721b lim: 100 exec/s: 43 rss: 74Mb L: 25/82 MS: 1 EraseBytes- 00:07:28.468 [2024-12-12 10:09:41.966374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.468 [2024-12-12 10:09:41.966404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.468 #44 NEW cov: 12433 ft: 14859 corp: 20/755b lim: 100 exec/s: 44 rss: 74Mb L: 34/82 MS: 1 ChangeBinInt- 00:07:28.468 [2024-12-12 10:09:42.036664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.468 [2024-12-12 10:09:42.036688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.468 #45 NEW cov: 12433 ft: 14895 corp: 21/780b lim: 100 exec/s: 45 rss: 74Mb L: 25/82 MS: 1 ChangeByte- 00:07:28.728 [2024-12-12 10:09:42.107217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.728 [2024-12-12 10:09:42.107250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.107373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.728 [2024-12-12 10:09:42.107394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.107522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.728 [2024-12-12 10:09:42.107546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.728 #46 NEW cov: 12433 ft: 14912 corp: 22/848b lim: 100 exec/s: 46 rss: 75Mb L: 68/82 MS: 1 ChangeBit- 00:07:28.728 [2024-12-12 10:09:42.177479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.728 [2024-12-12 10:09:42.177511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.177628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.728 [2024-12-12 10:09:42.177653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.177780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.728 [2024-12-12 10:09:42.177804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.177936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.728 [2024-12-12 10:09:42.177959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.728 #47 NEW cov: 12433 ft: 14926 corp: 23/944b lim: 100 exec/s: 47 rss: 75Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:07:28.728 [2024-12-12 10:09:42.247831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.728 [2024-12-12 10:09:42.247861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.247944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.728 [2024-12-12 10:09:42.247965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.248087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.728 [2024-12-12 10:09:42.248112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.248230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.728 [2024-12-12 10:09:42.248249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.248372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:07:28.728 [2024-12-12 10:09:42.248395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:28.728 #48 NEW cov: 12433 ft: 14964 corp: 24/1044b lim: 100 exec/s: 48 rss: 75Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:07:28.728 
[2024-12-12 10:09:42.317498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.728 [2024-12-12 10:09:42.317530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.728 [2024-12-12 10:09:42.317642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.728 [2024-12-12 10:09:42.317666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.728 #49 NEW cov: 12433 ft: 14973 corp: 25/1088b lim: 100 exec/s: 49 rss: 75Mb L: 44/100 MS: 1 ShuffleBytes- 00:07:28.987 [2024-12-12 10:09:42.367542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.987 [2024-12-12 10:09:42.367566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.987 #50 NEW cov: 12433 ft: 14981 corp: 26/1113b lim: 100 exec/s: 50 rss: 75Mb L: 25/100 MS: 1 ChangeByte- 00:07:28.987 [2024-12-12 10:09:42.417616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.987 [2024-12-12 10:09:42.417639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.987 #51 NEW cov: 12433 ft: 14994 corp: 27/1147b lim: 100 exec/s: 51 rss: 75Mb L: 34/100 MS: 1 ChangeByte- 00:07:28.987 [2024-12-12 10:09:42.488168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.987 [2024-12-12 10:09:42.488197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.987 [2024-12-12 10:09:42.488295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.987 [2024-12-12 10:09:42.488320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.987 #52 NEW cov: 12433 ft: 15027 corp: 28/1204b lim: 100 exec/s: 52 rss: 75Mb L: 57/100 MS: 1 ChangeBinInt- 00:07:28.987 [2024-12-12 10:09:42.538290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.987 [2024-12-12 10:09:42.538320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.987 [2024-12-12 10:09:42.538415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.987 [2024-12-12 10:09:42.538438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.987 [2024-12-12 10:09:42.538564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.987 [2024-12-12 10:09:42.538586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.987 #53 NEW cov: 12433 ft: 15034 corp: 29/1264b lim: 100 exec/s: 53 rss: 75Mb L: 60/100 MS: 1 CrossOver- 00:07:28.987 [2024-12-12 10:09:42.588764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:28.987 [2024-12-12 
10:09:42.588793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:28.987 [2024-12-12 10:09:42.588885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:28.987 [2024-12-12 10:09:42.588910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:28.987 [2024-12-12 10:09:42.589030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:28.987 [2024-12-12 10:09:42.589053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:28.987 [2024-12-12 10:09:42.589175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:28.987 [2024-12-12 10:09:42.589200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:28.987 #55 NEW cov: 12433 ft: 15038 corp: 30/1347b lim: 100 exec/s: 27 rss: 75Mb L: 83/100 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:28.987 #55 DONE cov: 12433 ft: 15038 corp: 30/1347b lim: 100 exec/s: 27 rss: 75Mb 00:07:28.987 Done 55 runs in 2 second(s) 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:29.247 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:29.248 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:29.248 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:29.248 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.248 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:29.248 10:09:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:29.248 [2024-12-12 10:09:42.760083] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:29.248 [2024-12-12 10:09:42.760160] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477787 ] 00:07:29.506 [2024-12-12 10:09:43.035633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.507 [2024-12-12 10:09:43.083844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.507 [2024-12-12 10:09:43.142790] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.766 [2024-12-12 10:09:43.159109] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:29.766 INFO: Running with entropic power schedule (0xFF, 100). 00:07:29.766 INFO: Seed: 3834651326 00:07:29.766 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:29.766 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:29.766 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:29.766 INFO: A corpus is not provided, starting from an empty corpus 00:07:29.766 #2 INITED exec/s: 0 rss: 65Mb 00:07:29.766 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:29.766 This may also happen if the target rejected all inputs we tried so far 00:07:29.766 [2024-12-12 10:09:43.214663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:29.766 [2024-12-12 10:09:43.214694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:29.766 [2024-12-12 10:09:43.214741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:29.766 [2024-12-12 10:09:43.214757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:29.766 [2024-12-12 10:09:43.214809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:29.766 [2024-12-12 10:09:43.214825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:29.766 [2024-12-12 10:09:43.214878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:29.766 [2024-12-12 10:09:43.214893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.025 NEW_FUNC[1/715]: 0x45c358 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:30.025 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:30.025 #8 NEW cov: 12175 ft: 12183 corp: 2/45b lim: 50 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:30.025 [2024-12-12 10:09:43.545795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.025 [2024-12-12 10:09:43.545883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.025 [2024-12-12 10:09:43.546004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.025 [2024-12-12 10:09:43.546034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.025 [2024-12-12 10:09:43.546105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567847437862 len:9767 00:07:30.025 [2024-12-12 10:09:43.546134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.025 [2024-12-12 10:09:43.546207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.025 [2024-12-12 10:09:43.546235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.025 NEW_FUNC[1/1]: 0x20066b8 in msg_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:843 00:07:30.025 #19 NEW cov: 12297 ft: 12965 corp: 3/89b lim: 50 exec/s: 0 rss: 74Mb L: 
44/44 MS: 1 ChangeBit- 00:07:30.025 [2024-12-12 10:09:43.615510] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.025 [2024-12-12 10:09:43.615537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.025 [2024-12-12 10:09:43.615582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.025 [2024-12-12 10:09:43.615597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.025 [2024-12-12 10:09:43.615648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:30.025 [2024-12-12 10:09:43.615662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.025 [2024-12-12 10:09:43.615720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913600 len:9767 00:07:30.025 [2024-12-12 10:09:43.615735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.025 #20 NEW cov: 12303 ft: 13253 corp: 4/134b lim: 50 exec/s: 0 rss: 74Mb L: 45/45 MS: 1 InsertByte- 00:07:30.025 [2024-12-12 10:09:43.655617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.025 [2024-12-12 10:09:43.655643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.025 [2024-12-12 10:09:43.655684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:30.025 [2024-12-12 10:09:43.655700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.025 [2024-12-12 10:09:43.655754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567847437862 len:9767 00:07:30.025 [2024-12-12 10:09:43.655772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.025 [2024-12-12 10:09:43.655822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.025 [2024-12-12 10:09:43.655835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.285 #21 NEW cov: 12388 ft: 13548 corp: 5/178b lim: 50 exec/s: 0 rss: 74Mb L: 44/45 MS: 1 ChangeBinInt- 00:07:30.285 [2024-12-12 10:09:43.715776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.715806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.715839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 
len:9762 00:07:30.285 [2024-12-12 10:09:43.715854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.715902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2749038052313540134 len:35623 00:07:30.285 [2024-12-12 10:09:43.715917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.715966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.715982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.285 #22 NEW cov: 12388 ft: 13673 corp: 6/225b lim: 50 exec/s: 0 rss: 74Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:07:30.285 [2024-12-12 10:09:43.775937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.775963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.776008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.776023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.776072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3325387320150337062 len:9767 00:07:30.285 [2024-12-12 10:09:43.776087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.776138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.776153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.285 #23 NEW cov: 12388 ft: 13886 corp: 7/272b lim: 50 exec/s: 0 rss: 74Mb L: 47/47 MS: 1 CopyPart- 00:07:30.285 [2024-12-12 10:09:43.816054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.816080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.816123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:30.285 [2024-12-12 10:09:43.816138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.816190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567847437860 len:9767 00:07:30.285 [2024-12-12 10:09:43.816224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:07:30.285 [2024-12-12 10:09:43.816275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.816291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.285 #24 NEW cov: 12388 ft: 13928 corp: 8/316b lim: 50 exec/s: 0 rss: 74Mb L: 44/47 MS: 1 ChangeBit- 00:07:30.285 [2024-12-12 10:09:43.856190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.856217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.856262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.856277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.856327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567847437862 len:9767 00:07:30.285 [2024-12-12 10:09:43.856342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.856392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567848551974 len:9767 00:07:30.285 [2024-12-12 10:09:43.856407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.285 #25 NEW cov: 12388 ft: 13944 corp: 9/360b lim: 50 exec/s: 0 rss: 74Mb L: 44/47 MS: 1 ChangeByte- 00:07:30.285 [2024-12-12 10:09:43.896270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926571487569446 len:9767 00:07:30.285 [2024-12-12 10:09:43.896297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.896341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.896357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.896405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:30.285 [2024-12-12 10:09:43.896420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.285 [2024-12-12 10:09:43.896472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:4622424012833039910 len:9767 00:07:30.285 [2024-12-12 10:09:43.896486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.285 #30 NEW cov: 12388 ft: 14002 corp: 10/408b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 5 ShuffleBytes-ChangeBinInt-ShuffleBytes-InsertRepeatedBytes-CrossOver- 00:07:30.545 [2024-12-12 
10:09:43.936414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:193273528320 len:9767 00:07:30.545 [2024-12-12 10:09:43.936440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:43.936486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 10:09:43.936501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:43.936558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 10:09:43.936574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:43.936624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913600 len:9767 00:07:30.545 [2024-12-12 10:09:43.936639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.545 #31 NEW cov: 12388 ft: 14042 corp: 11/453b lim: 50 exec/s: 0 rss: 74Mb L: 45/48 MS: 1 ChangeBinInt- 00:07:30.545 [2024-12-12 10:09:43.996559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 10:09:43.996585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:43.996629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 10:09:43.996644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:43.996694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748986173403571750 len:23645 00:07:30.545 [2024-12-12 10:09:43.996709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:43.996764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 10:09:43.996778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.545 #32 NEW cov: 12388 ft: 14056 corp: 12/501b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:07:30.545 [2024-12-12 10:09:44.036658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 10:09:44.036684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:44.036735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 
10:09:44.036750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:44.036800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748986173403571750 len:23645 00:07:30.545 [2024-12-12 10:09:44.036822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:44.036871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 10:09:44.036886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.545 #33 NEW cov: 12388 ft: 14068 corp: 13/549b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 CopyPart- 00:07:30.545 [2024-12-12 10:09:44.096604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926546439185958 len:9775 00:07:30.545 [2024-12-12 10:09:44.096630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:44.096663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846782502 len:9767 00:07:30.545 [2024-12-12 10:09:44.096681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.545 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:30.545 #37 NEW cov: 12411 ft: 14424 corp: 14/573b lim: 50 exec/s: 0 rss: 74Mb L: 24/48 MS: 4 ChangeBit-ChangeByte-CopyPart-CrossOver- 00:07:30.545 [2024-12-12 10:09:44.136725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567377151534 len:9767 00:07:30.545 [2024-12-12 10:09:44.136753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.545 [2024-12-12 10:09:44.136804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 10:09:44.136819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.545 #38 NEW cov: 12411 ft: 14426 corp: 15/599b lim: 50 exec/s: 0 rss: 74Mb L: 26/48 MS: 1 CrossOver- 00:07:30.545 [2024-12-12 10:09:44.177057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.545 [2024-12-12 10:09:44.177083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.546 [2024-12-12 10:09:44.177126] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:30.546 [2024-12-12 10:09:44.177142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.546 [2024-12-12 10:09:44.177191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:2 nsid:0 lba:15697817505878890971 len:9767 00:07:30.546 [2024-12-12 10:09:44.177206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.546 [2024-12-12 10:09:44.177257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.546 [2024-12-12 10:09:44.177272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.804 #39 NEW cov: 12411 ft: 14467 corp: 16/643b lim: 50 exec/s: 39 rss: 74Mb L: 44/48 MS: 1 ChangeBinInt- 00:07:30.804 [2024-12-12 10:09:44.237201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.804 [2024-12-12 10:09:44.237228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.804 [2024-12-12 10:09:44.237271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.804 [2024-12-12 10:09:44.237286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.804 [2024-12-12 10:09:44.237335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:30.804 [2024-12-12 10:09:44.237349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.804 [2024-12-12 10:09:44.237415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.804 [2024-12-12 10:09:44.237430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.804 #40 NEW cov: 12411 ft: 14479 corp: 17/687b lim: 50 exec/s: 40 rss: 74Mb L: 44/48 MS: 1 CrossOver- 00:07:30.804 [2024-12-12 10:09:44.277304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926571370325542 len:9767 00:07:30.805 [2024-12-12 10:09:44.277334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.805 [2024-12-12 10:09:44.277385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.805 [2024-12-12 10:09:44.277401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.805 [2024-12-12 10:09:44.277451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913582 len:23645 00:07:30.805 [2024-12-12 10:09:44.277465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.805 [2024-12-12 10:09:44.277515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926568756422182 len:9767 00:07:30.805 [2024-12-12 10:09:44.277529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.805 #45 NEW cov: 12411 ft: 14494 corp: 18/728b lim: 50 exec/s: 45 rss: 74Mb L: 41/48 MS: 5 ShuffleBytes-ChangeByte-CopyPart-InsertByte-CrossOver- 00:07:30.805 [2024-12-12 10:09:44.317442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:30.805 [2024-12-12 10:09:44.317470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.805 [2024-12-12 10:09:44.317506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:30.805 [2024-12-12 10:09:44.317520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.805 [2024-12-12 10:09:44.317570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:30.805 [2024-12-12 10:09:44.317585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:30.805 [2024-12-12 10:09:44.317637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:30.805 [2024-12-12 10:09:44.317652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:30.805 #46 NEW cov: 12411 ft: 14587 corp: 19/776b lim: 50 exec/s: 46 rss: 74Mb L: 48/48 MS: 1 CopyPart- 00:07:30.805 [2024-12-12 10:09:44.357317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926546439185958 len:9775 00:07:30.805 [2024-12-12 10:09:44.357345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.805 [2024-12-12 10:09:44.357385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2749115683846759974 len:9767 00:07:30.805 [2024-12-12 10:09:44.357400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.805 #47 NEW cov: 12411 ft: 14606 corp: 20/800b lim: 50 exec/s: 47 rss: 74Mb L: 24/48 MS: 1 ChangeByte- 00:07:30.805 [2024-12-12 10:09:44.417516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:07:30.805 [2024-12-12 10:09:44.417543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:30.805 [2024-12-12 10:09:44.417576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:30.805 [2024-12-12 10:09:44.417591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:30.805 #48 NEW cov: 12411 ft: 14629 corp: 21/825b lim: 50 exec/s: 48 rss: 74Mb L: 25/48 MS: 1 InsertRepeatedBytes- 00:07:31.064 [2024-12-12 10:09:44.457839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926571487569446 len:9767 00:07:31.064 [2024-12-12 10:09:44.457866] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.457912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2749042016567830054 len:9767 00:07:31.064 [2024-12-12 10:09:44.457926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.457976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:31.064 [2024-12-12 10:09:44.457991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.458046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2756244917241390630 len:9767 00:07:31.064 [2024-12-12 10:09:44.458061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.064 #49 NEW cov: 12411 ft: 14656 corp: 22/874b lim: 50 exec/s: 49 rss: 75Mb L: 49/49 MS: 1 InsertByte- 00:07:31.064 [2024-12-12 10:09:44.517840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:10791 00:07:31.064 [2024-12-12 10:09:44.517867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.064 #50 NEW cov: 12411 ft: 14954 corp: 23/888b lim: 50 exec/s: 50 rss: 75Mb L: 14/49 MS: 1 CrossOver- 00:07:31.064 [2024-12-12 10:09:44.558242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:193273528320 len:9767 00:07:31.064 [2024-12-12 10:09:44.558269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.558312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9767 00:07:31.064 [2024-12-12 10:09:44.558327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.558378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567981131302 len:9767 00:07:31.064 [2024-12-12 10:09:44.558393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.558445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913600 len:9767 00:07:31.064 [2024-12-12 10:09:44.558460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.064 #51 NEW cov: 12411 ft: 14971 corp: 24/933b lim: 50 exec/s: 51 rss: 75Mb L: 45/49 MS: 1 ChangeBit- 00:07:31.064 [2024-12-12 10:09:44.618421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.064 [2024-12-12 10:09:44.618449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.618493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:31.064 [2024-12-12 10:09:44.618508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.618559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3109214538037077540 len:9767 00:07:31.064 [2024-12-12 10:09:44.618594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.618646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:31.064 [2024-12-12 10:09:44.618661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.064 #52 NEW cov: 12411 ft: 15003 corp: 25/978b lim: 50 exec/s: 52 rss: 75Mb L: 45/49 MS: 1 InsertByte- 00:07:31.064 [2024-12-12 10:09:44.658503] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.064 [2024-12-12 10:09:44.658530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.658574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748927499854816806 len:65282 00:07:31.064 [2024-12-12 10:09:44.658589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.658639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15697817502221457883 len:9767 00:07:31.064 [2024-12-12 10:09:44.658654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.064 [2024-12-12 10:09:44.658706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:31.064 [2024-12-12 10:09:44.658726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.064 #53 NEW cov: 12411 ft: 15026 corp: 26/1022b lim: 50 exec/s: 53 rss: 75Mb L: 44/49 MS: 1 CMP- DE: "\377\377\001\000"- 00:07:31.324 [2024-12-12 10:09:44.718722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.324 [2024-12-12 10:09:44.718750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.718794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:31.324 [2024-12-12 10:09:44.718809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.718857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:2 nsid:0 lba:2748926567847437862 len:9767 00:07:31.324 [2024-12-12 10:09:44.718873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.718923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9788 00:07:31.324 [2024-12-12 10:09:44.718938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.324 #54 NEW cov: 12411 ft: 15034 corp: 27/1067b lim: 50 exec/s: 54 rss: 75Mb L: 45/49 MS: 1 InsertByte- 00:07:31.324 [2024-12-12 10:09:44.758805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.324 [2024-12-12 10:09:44.758832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.758876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:31.324 [2024-12-12 10:09:44.758891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.758945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567847437862 len:9767 00:07:31.324 [2024-12-12 10:09:44.758960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.759013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9788 00:07:31.324 [2024-12-12 10:09:44.759028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.324 #55 NEW cov: 12411 ft: 15070 corp: 28/1112b lim: 50 exec/s: 55 rss: 75Mb L: 45/49 MS: 1 ChangeByte- 00:07:31.324 [2024-12-12 10:09:44.818783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:731313913377203750 len:9775 00:07:31.324 [2024-12-12 10:09:44.818810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.818844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2749115683846759974 len:9767 00:07:31.324 [2024-12-12 10:09:44.818859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.324 #56 NEW cov: 12411 ft: 15086 corp: 29/1136b lim: 50 exec/s: 56 rss: 75Mb L: 24/49 MS: 1 CrossOver- 00:07:31.324 [2024-12-12 10:09:44.879180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.324 [2024-12-12 10:09:44.879207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.879245] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:31.324 
[2024-12-12 10:09:44.879260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.879311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567847437862 len:9767 00:07:31.324 [2024-12-12 10:09:44.879326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.879394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:31.324 [2024-12-12 10:09:44.879410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.324 #57 NEW cov: 12411 ft: 15087 corp: 30/1181b lim: 50 exec/s: 57 rss: 75Mb L: 45/49 MS: 1 CrossOver- 00:07:31.324 [2024-12-12 10:09:44.919253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.324 [2024-12-12 10:09:44.919280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.919324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:31.324 [2024-12-12 10:09:44.919339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.919388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567847437862 len:9767 00:07:31.324 [2024-12-12 10:09:44.919404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.324 [2024-12-12 10:09:44.919455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:31.324 [2024-12-12 10:09:44.919472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.324 #58 NEW cov: 12411 ft: 15144 corp: 31/1229b lim: 50 exec/s: 58 rss: 75Mb L: 48/49 MS: 1 CopyPart- 00:07:31.589 [2024-12-12 10:09:44.979348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.589 [2024-12-12 10:09:44.979375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:44.979409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15698380452800879313 len:55770 00:07:31.589 [2024-12-12 10:09:44.979423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:44.979473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:31.589 [2024-12-12 10:09:44.979487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.589 #59 NEW 
cov: 12411 ft: 15356 corp: 32/1265b lim: 50 exec/s: 59 rss: 75Mb L: 36/49 MS: 1 EraseBytes- 00:07:31.589 [2024-12-12 10:09:45.019547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.589 [2024-12-12 10:09:45.019574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.019614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:31.589 [2024-12-12 10:09:45.019629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.019680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15697816737079744987 len:9767 00:07:31.589 [2024-12-12 10:09:45.019695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.019748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567858644518 len:9767 00:07:31.589 [2024-12-12 10:09:45.019763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.589 #60 NEW cov: 12411 ft: 15394 corp: 33/1313b lim: 50 exec/s: 60 rss: 75Mb L: 48/49 MS: 1 CrossOver- 00:07:31.589 [2024-12-12 10:09:45.059543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.589 [2024-12-12 10:09:45.059569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.059604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15698380452800879313 len:55770 00:07:31.589 [2024-12-12 10:09:45.059619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.059669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:31.589 [2024-12-12 10:09:45.059685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.119728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070054618662 len:349 00:07:31.589 [2024-12-12 10:09:45.119755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.119802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15698380452412119761 len:55770 00:07:31.589 [2024-12-12 10:09:45.119820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.119871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567846913574 len:9767 00:07:31.589 [2024-12-12 
10:09:45.119886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.589 #62 NEW cov: 12411 ft: 15412 corp: 34/1349b lim: 50 exec/s: 62 rss: 75Mb L: 36/49 MS: 2 CrossOver-CMP- DE: "\377\377\377\377\001\\\016\365"- 00:07:31.589 [2024-12-12 10:09:45.159894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2748926567846913574 len:9767 00:07:31.589 [2024-12-12 10:09:45.159921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.159971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2748926567846913574 len:9762 00:07:31.589 [2024-12-12 10:09:45.159985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.160035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926567847437862 len:9767 00:07:31.589 [2024-12-12 10:09:45.160065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.589 [2024-12-12 10:09:45.160115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2748926567846913574 len:9767 00:07:31.589 [2024-12-12 10:09:45.160129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:31.589 #63 NEW cov: 12411 ft: 15425 corp: 35/1397b lim: 50 exec/s: 63 rss: 75Mb L: 48/49 MS: 1 CrossOver- 00:07:31.589 [2024-12-12 10:09:45.219998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070054618662 len:349 00:07:31.590 [2024-12-12 10:09:45.220024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:31.590 [2024-12-12 10:09:45.220084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17715702814390756902 len:56282 00:07:31.590 [2024-12-12 10:09:45.220099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:31.590 [2024-12-12 10:09:45.220159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2748926570861812006 len:9767 00:07:31.590 [2024-12-12 10:09:45.220174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:31.855 #64 pulse cov: 12411 ft: 15466 corp: 35/1397b lim: 50 exec/s: 32 rss: 75Mb 00:07:31.855 #64 NEW cov: 12411 ft: 15466 corp: 36/1436b lim: 50 exec/s: 32 rss: 75Mb L: 39/49 MS: 1 CrossOver- 00:07:31.855 #64 DONE cov: 12411 ft: 15466 corp: 36/1436b lim: 50 exec/s: 32 rss: 75Mb 00:07:31.855 ###### Recommended dictionary. ###### 00:07:31.855 "\377\377\001\000" # Uses: 0 00:07:31.855 "\377\377\377\377\001\\\016\365" # Uses: 0 00:07:31.855 ###### End of recommended dictionary. 
###### 00:07:31.855 Done 64 runs in 2 second(s) 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:31.855 10:09:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:31.855 [2024-12-12 10:09:45.415931] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:31.855 [2024-12-12 10:09:45.416007] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478316 ] 00:07:32.114 [2024-12-12 10:09:45.696381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.373 [2024-12-12 10:09:45.757488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.373 [2024-12-12 10:09:45.816977] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.373 [2024-12-12 10:09:45.833309] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:32.373 INFO: Running with entropic power schedule (0xFF, 100). 00:07:32.373 INFO: Seed: 2214685432 00:07:32.373 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:32.373 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:32.373 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:32.373 INFO: A corpus is not provided, starting from an empty corpus 00:07:32.373 #2 INITED exec/s: 0 rss: 65Mb 00:07:32.373 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:32.373 This may also happen if the target rejected all inputs we tried so far 00:07:32.373 [2024-12-12 10:09:45.898935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.373 [2024-12-12 10:09:45.898967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.373 [2024-12-12 10:09:45.899029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.373 [2024-12-12 10:09:45.899046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.373 [2024-12-12 10:09:45.899105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.373 [2024-12-12 10:09:45.899121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.632 NEW_FUNC[1/718]: 0x45df18 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:32.632 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:32.632 #13 NEW cov: 12224 ft: 12225 corp: 2/68b lim: 90 exec/s: 0 rss: 72Mb L: 67/67 MS: 1 InsertRepeatedBytes- 00:07:32.632 [2024-12-12 10:09:46.239661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.632 [2024-12-12 10:09:46.239710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.632 [2024-12-12 10:09:46.239792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.632 [2024-12-12 10:09:46.239816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.891 #14 NEW cov: 12354 
ft: 13166 corp: 3/115b lim: 90 exec/s: 0 rss: 73Mb L: 47/67 MS: 1 EraseBytes- 00:07:32.891 [2024-12-12 10:09:46.299832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.891 [2024-12-12 10:09:46.299861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.891 [2024-12-12 10:09:46.299901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.891 [2024-12-12 10:09:46.299918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.891 [2024-12-12 10:09:46.299974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.891 [2024-12-12 10:09:46.299990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.891 #20 NEW cov: 12360 ft: 13393 corp: 4/182b lim: 90 exec/s: 0 rss: 73Mb L: 67/67 MS: 1 ChangeByte- 00:07:32.891 [2024-12-12 10:09:46.339931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.891 [2024-12-12 10:09:46.339958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.891 [2024-12-12 10:09:46.339996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.891 [2024-12-12 10:09:46.340011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.891 [2024-12-12 10:09:46.340067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.891 [2024-12-12 10:09:46.340082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.891 #21 NEW cov: 12445 ft: 13690 corp: 5/249b lim: 90 exec/s: 0 rss: 73Mb L: 67/67 MS: 1 ChangeBinInt- 00:07:32.892 [2024-12-12 10:09:46.400086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.892 [2024-12-12 10:09:46.400113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.892 [2024-12-12 10:09:46.400154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.892 [2024-12-12 10:09:46.400169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.892 [2024-12-12 10:09:46.400224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.892 [2024-12-12 10:09:46.400240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.892 #22 NEW cov: 12445 ft: 13797 corp: 6/316b lim: 90 exec/s: 0 rss: 73Mb L: 67/67 MS: 1 ShuffleBytes- 00:07:32.892 [2024-12-12 10:09:46.460395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.892 [2024-12-12 10:09:46.460425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.892 [2024-12-12 10:09:46.460464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.892 [2024-12-12 10:09:46.460479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.892 [2024-12-12 10:09:46.460534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.892 [2024-12-12 10:09:46.460550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.892 [2024-12-12 10:09:46.460607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.892 [2024-12-12 10:09:46.460621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:32.892 #23 NEW cov: 12445 ft: 14235 corp: 7/393b lim: 90 exec/s: 0 rss: 73Mb L: 77/77 MS: 1 InsertRepeatedBytes- 00:07:32.892 [2024-12-12 10:09:46.500508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:32.892 [2024-12-12 10:09:46.500535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:32.892 [2024-12-12 10:09:46.500581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:32.892 [2024-12-12 10:09:46.500597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:32.892 [2024-12-12 10:09:46.500653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:32.892 [2024-12-12 10:09:46.500685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:32.892 [2024-12-12 10:09:46.500742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:32.892 [2024-12-12 10:09:46.500758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.151 #24 NEW cov: 12445 ft: 14307 corp: 8/470b lim: 90 exec/s: 0 rss: 73Mb L: 77/77 MS: 1 ShuffleBytes- 00:07:33.151 [2024-12-12 10:09:46.560678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.151 [2024-12-12 10:09:46.560704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.560757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.151 [2024-12-12 10:09:46.560773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.560830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.151 [2024-12-12 10:09:46.560845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.560899] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.151 [2024-12-12 10:09:46.560915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.151 #25 NEW cov: 12445 ft: 14395 corp: 9/548b lim: 90 exec/s: 0 rss: 73Mb L: 78/78 MS: 1 InsertByte- 00:07:33.151 [2024-12-12 10:09:46.600776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.151 [2024-12-12 10:09:46.600803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.600871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.151 [2024-12-12 10:09:46.600890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.600944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.151 [2024-12-12 10:09:46.600959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.601015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.151 [2024-12-12 10:09:46.601030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.151 #26 NEW cov: 12445 ft: 14408 corp: 10/627b lim: 90 exec/s: 0 rss: 73Mb L: 79/79 MS: 1 InsertByte- 00:07:33.151 [2024-12-12 10:09:46.660545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.151 [2024-12-12 10:09:46.660572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.151 #27 NEW cov: 12445 ft: 15187 corp: 11/662b lim: 90 exec/s: 0 rss: 73Mb L: 35/79 MS: 1 InsertRepeatedBytes- 00:07:33.151 [2024-12-12 10:09:46.700762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.151 [2024-12-12 10:09:46.700789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.700842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.151 [2024-12-12 10:09:46.700859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.151 #28 NEW cov: 12445 ft: 15211 corp: 12/709b lim: 90 exec/s: 0 rss: 73Mb L: 47/79 MS: 1 ShuffleBytes- 00:07:33.151 [2024-12-12 10:09:46.761299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.151 [2024-12-12 10:09:46.761327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.761365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.151 [2024-12-12 10:09:46.761380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.761436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.151 [2024-12-12 10:09:46.761467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.151 [2024-12-12 10:09:46.761524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.151 [2024-12-12 10:09:46.761540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.411 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:33.411 #29 NEW cov: 12468 ft: 15257 corp: 13/786b lim: 90 exec/s: 0 rss: 74Mb L: 77/79 MS: 1 ChangeByte- 00:07:33.411 [2024-12-12 10:09:46.821272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.411 [2024-12-12 10:09:46.821299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.821338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.411 [2024-12-12 10:09:46.821353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.821411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.411 [2024-12-12 10:09:46.821429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.411 #30 NEW cov: 12468 ft: 15271 corp: 14/855b lim: 90 exec/s: 0 rss: 74Mb L: 69/79 MS: 1 EraseBytes- 00:07:33.411 [2024-12-12 10:09:46.861527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.411 [2024-12-12 10:09:46.861554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.861601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.411 [2024-12-12 10:09:46.861617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.861675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.411 [2024-12-12 10:09:46.861691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.861749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.411 [2024-12-12 10:09:46.861765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.411 #31 NEW cov: 12468 ft: 15295 corp: 15/933b lim: 90 exec/s: 31 rss: 74Mb L: 78/79 MS: 1 InsertByte- 00:07:33.411 [2024-12-12 10:09:46.901474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:07:33.411 [2024-12-12 10:09:46.901501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.901545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.411 [2024-12-12 10:09:46.901560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.901618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.411 [2024-12-12 10:09:46.901634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.411 #32 NEW cov: 12468 ft: 15301 corp: 16/1000b lim: 90 exec/s: 32 rss: 74Mb L: 67/79 MS: 1 CopyPart- 00:07:33.411 [2024-12-12 10:09:46.941781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.411 [2024-12-12 10:09:46.941807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.941857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.411 [2024-12-12 10:09:46.941873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.941928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.411 [2024-12-12 10:09:46.941944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.941998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.411 [2024-12-12 10:09:46.942014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.411 #33 NEW cov: 12468 ft: 15318 corp: 17/1078b lim: 90 exec/s: 33 rss: 74Mb L: 78/79 MS: 1 InsertByte- 00:07:33.411 [2024-12-12 10:09:46.981710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.411 [2024-12-12 10:09:46.981743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.981786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.411 [2024-12-12 10:09:46.981801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:46.981858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.411 [2024-12-12 10:09:46.981873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.411 #34 NEW cov: 12468 ft: 15330 corp: 18/1136b lim: 90 exec/s: 34 rss: 74Mb L: 58/79 MS: 1 CopyPart- 00:07:33.411 [2024-12-12 10:09:47.022020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 
nsid:0 00:07:33.411 [2024-12-12 10:09:47.022047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.411 [2024-12-12 10:09:47.022093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.411 [2024-12-12 10:09:47.022108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.412 [2024-12-12 10:09:47.022180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.412 [2024-12-12 10:09:47.022196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.412 [2024-12-12 10:09:47.022252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.412 [2024-12-12 10:09:47.022267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.412 #35 NEW cov: 12468 ft: 15387 corp: 19/1215b lim: 90 exec/s: 35 rss: 74Mb L: 79/79 MS: 1 CrossOver- 00:07:33.671 [2024-12-12 10:09:47.062083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.671 [2024-12-12 10:09:47.062110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.062158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.671 [2024-12-12 10:09:47.062174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.062230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.671 [2024-12-12 10:09:47.062246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.062303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.671 [2024-12-12 10:09:47.062318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.671 #36 NEW cov: 12468 ft: 15419 corp: 20/1303b lim: 90 exec/s: 36 rss: 74Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:07:33.671 [2024-12-12 10:09:47.122127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.671 [2024-12-12 10:09:47.122153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.122197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.671 [2024-12-12 10:09:47.122213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.122270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.671 [2024-12-12 10:09:47.122288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.671 #37 NEW cov: 12468 ft: 15439 corp: 21/1372b lim: 90 exec/s: 37 rss: 74Mb L: 69/88 MS: 1 ChangeBinInt- 00:07:33.671 [2024-12-12 10:09:47.182288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.671 [2024-12-12 10:09:47.182315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.182356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.671 [2024-12-12 10:09:47.182372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.182428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.671 [2024-12-12 10:09:47.182444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.671 #38 NEW cov: 12468 ft: 15446 corp: 22/1439b lim: 90 exec/s: 38 rss: 74Mb L: 67/88 MS: 1 ShuffleBytes- 00:07:33.671 [2024-12-12 10:09:47.222218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.671 [2024-12-12 10:09:47.222244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.222285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.671 [2024-12-12 10:09:47.222301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.671 #39 NEW cov: 12468 ft: 15595 corp: 23/1490b lim: 90 exec/s: 39 rss: 74Mb L: 51/88 MS: 1 EraseBytes- 00:07:33.671 [2024-12-12 10:09:47.282702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.671 [2024-12-12 10:09:47.282735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.282790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.671 [2024-12-12 10:09:47.282805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.282877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.671 [2024-12-12 10:09:47.282892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.671 [2024-12-12 10:09:47.282948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.671 [2024-12-12 10:09:47.282962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.671 #40 NEW cov: 12468 ft: 15608 corp: 24/1568b lim: 90 exec/s: 40 rss: 74Mb L: 78/88 MS: 1 ChangeBit- 00:07:33.931 [2024-12-12 10:09:47.322816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:07:33.931 [2024-12-12 10:09:47.322843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.322890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.931 [2024-12-12 10:09:47.322905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.322961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.931 [2024-12-12 10:09:47.322977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.323037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.931 [2024-12-12 10:09:47.323053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.931 #41 NEW cov: 12468 ft: 15649 corp: 25/1650b lim: 90 exec/s: 41 rss: 74Mb L: 82/88 MS: 1 CopyPart- 00:07:33.931 [2024-12-12 10:09:47.382995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.931 [2024-12-12 10:09:47.383022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.383071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.931 [2024-12-12 10:09:47.383087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.383139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.931 [2024-12-12 10:09:47.383156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.383211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.931 [2024-12-12 10:09:47.383227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.931 #42 NEW cov: 12468 ft: 15655 corp: 26/1732b lim: 90 exec/s: 42 rss: 74Mb L: 82/88 MS: 1 CopyPart- 00:07:33.931 [2024-12-12 10:09:47.443171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.931 [2024-12-12 10:09:47.443199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.443246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.931 [2024-12-12 10:09:47.443261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.443317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.931 [2024-12-12 10:09:47.443333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.443389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.931 [2024-12-12 10:09:47.443404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:33.931 #43 NEW cov: 12468 ft: 15691 corp: 27/1818b lim: 90 exec/s: 43 rss: 74Mb L: 86/88 MS: 1 InsertRepeatedBytes- 00:07:33.931 [2024-12-12 10:09:47.482985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.931 [2024-12-12 10:09:47.483013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.483053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.931 [2024-12-12 10:09:47.483069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.931 #44 NEW cov: 12468 ft: 15709 corp: 28/1869b lim: 90 exec/s: 44 rss: 74Mb L: 51/88 MS: 1 CrossOver- 00:07:33.931 [2024-12-12 10:09:47.523042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.931 [2024-12-12 10:09:47.523069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.523107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.931 [2024-12-12 10:09:47.523126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.931 #45 NEW cov: 12468 ft: 15730 corp: 29/1907b lim: 90 exec/s: 45 rss: 74Mb L: 38/88 MS: 1 EraseBytes- 00:07:33.931 [2024-12-12 10:09:47.563499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:33.931 [2024-12-12 10:09:47.563526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.563576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:33.931 [2024-12-12 10:09:47.563592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.563662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:33.931 [2024-12-12 10:09:47.563678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:33.931 [2024-12-12 10:09:47.563738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:33.932 [2024-12-12 10:09:47.563754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.191 #46 NEW cov: 12468 ft: 15735 corp: 30/1994b lim: 90 exec/s: 46 rss: 74Mb L: 87/88 MS: 1 CopyPart- 00:07:34.191 [2024-12-12 10:09:47.623679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 
nsid:0 00:07:34.191 [2024-12-12 10:09:47.623706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.623773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:34.191 [2024-12-12 10:09:47.623787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.623842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:34.191 [2024-12-12 10:09:47.623856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.623912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:34.191 [2024-12-12 10:09:47.623927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.191 #47 NEW cov: 12468 ft: 15794 corp: 31/2071b lim: 90 exec/s: 47 rss: 74Mb L: 77/88 MS: 1 ChangeByte- 00:07:34.191 [2024-12-12 10:09:47.663800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:34.191 [2024-12-12 10:09:47.663827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.663876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:34.191 [2024-12-12 10:09:47.663891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.663947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:34.191 [2024-12-12 10:09:47.663962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.664019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:34.191 [2024-12-12 10:09:47.664034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.191 #48 NEW cov: 12468 ft: 15841 corp: 32/2153b lim: 90 exec/s: 48 rss: 74Mb L: 82/88 MS: 1 ChangeByte- 00:07:34.191 [2024-12-12 10:09:47.703738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:34.191 [2024-12-12 10:09:47.703765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.703818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:34.191 [2024-12-12 10:09:47.703835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.703893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:34.191 [2024-12-12 10:09:47.703909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.191 #49 NEW cov: 12468 ft: 15851 corp: 33/2212b lim: 90 exec/s: 49 rss: 74Mb L: 59/88 MS: 1 InsertByte- 00:07:34.191 [2024-12-12 10:09:47.743999] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:34.191 [2024-12-12 10:09:47.744027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.744075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:34.191 [2024-12-12 10:09:47.744091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.744146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:34.191 [2024-12-12 10:09:47.744162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.744220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:34.191 [2024-12-12 10:09:47.744236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.191 #50 NEW cov: 12468 ft: 15866 corp: 34/2297b lim: 90 exec/s: 50 rss: 74Mb L: 85/88 MS: 1 CopyPart- 00:07:34.191 [2024-12-12 10:09:47.804191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:34.191 [2024-12-12 10:09:47.804219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.804267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:34.191 [2024-12-12 10:09:47.804283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.804339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:34.191 [2024-12-12 10:09:47.804355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:34.191 [2024-12-12 10:09:47.804411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:34.191 [2024-12-12 10:09:47.804427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:34.451 #51 NEW cov: 12468 ft: 15922 corp: 35/2384b lim: 90 exec/s: 51 rss: 75Mb L: 87/88 MS: 1 ChangeBit- 00:07:34.451 [2024-12-12 10:09:47.863898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:34.451 [2024-12-12 10:09:47.863926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.451 #52 NEW cov: 12468 ft: 15937 corp: 36/2412b lim: 90 exec/s: 26 rss: 75Mb L: 28/88 MS: 1 CrossOver- 00:07:34.451 #52 DONE cov: 12468 ft: 15937 corp: 36/2412b lim: 90 exec/s: 26 rss: 75Mb 00:07:34.451 Done 52 runs in 2 second(s) 00:07:34.451 
10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:34.451 10:09:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:34.451 [2024-12-12 10:09:48.058613] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:34.451 [2024-12-12 10:09:48.058688] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478856 ] 00:07:34.710 [2024-12-12 10:09:48.331894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.968 [2024-12-12 10:09:48.384148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.968 [2024-12-12 10:09:48.443005] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.968 [2024-12-12 10:09:48.459338] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:34.968 INFO: Running with entropic power schedule (0xFF, 100). 
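The nvmf/run.sh trace above records how each fuzzer instance is parameterized before launch: the fuzzer number (21) is appended to the 44 prefix to select TCP listener port 4421, a per-fuzzer corpus directory and JSON config are prepared, sed rewrites the trsvcid in the stock fuzz_json.conf from 4420 to the chosen port, two known leaks are added to the LSAN suppression file, and llvm_nvme_fuzz is started against the resulting trid. A minimal shell sketch of that flow follows; the long workspace paths are replaced by a placeholder $SPDK_DIR, and the redirection targets are inferred from the -c and -D arguments of the final command, since xtrace does not show redirections:

  fuzzer_type=21; timen=1; core=0x1
  corpus_dir=$SPDK_DIR/../corpus/llvm_nvmf_$fuzzer_type
  nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
  port=44$(printf %02d $fuzzer_type)        # "44" + "21" -> 4421 for fuzzer 21
  mkdir -p $corpus_dir
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # rewrite the default listener port in the template config (output file inferred)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      $SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg
  # suppress two known, accepted leaks for this target (destination file inferred)
  echo leak:spdk_nvmf_qpair_disconnect  > $suppress_file
  echo leak:nvmf_ctrlr_create          >> $suppress_file
  $SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
      -P $SPDK_DIR/../output/llvm/ -F "$trid" -c $nvmf_cfg -t $timen \
      -D $corpus_dir -Z $fuzzer_type

This is only a condensed reading of the traced commands, not the script itself; flag meanings and file names are taken verbatim from the invocation logged above.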
00:07:34.968 INFO: Seed: 543715085 00:07:34.968 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:34.968 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:34.968 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:34.968 INFO: A corpus is not provided, starting from an empty corpus 00:07:34.968 #2 INITED exec/s: 0 rss: 65Mb 00:07:34.968 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:34.968 This may also happen if the target rejected all inputs we tried so far 00:07:34.968 [2024-12-12 10:09:48.518515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:34.968 [2024-12-12 10:09:48.518544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:34.968 [2024-12-12 10:09:48.518600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:34.968 [2024-12-12 10:09:48.518621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:34.968 [2024-12-12 10:09:48.518678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:34.968 [2024-12-12 10:09:48.518694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.227 NEW_FUNC[1/718]: 0x461148 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:35.227 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:35.227 #3 NEW cov: 12213 ft: 12217 corp: 2/38b lim: 50 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:35.227 [2024-12-12 10:09:48.849783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.227 [2024-12-12 10:09:48.849840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.227 [2024-12-12 10:09:48.849941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.227 [2024-12-12 10:09:48.849971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.227 [2024-12-12 10:09:48.850057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.227 [2024-12-12 10:09:48.850087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.486 #4 NEW cov: 12330 ft: 12949 corp: 3/76b lim: 50 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 InsertByte- 00:07:35.486 [2024-12-12 10:09:48.919533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.486 [2024-12-12 10:09:48.919560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.486 [2024-12-12 10:09:48.919601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.486 [2024-12-12 10:09:48.919617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.486 [2024-12-12 10:09:48.919679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.486 [2024-12-12 10:09:48.919695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.486 #5 NEW cov: 12336 ft: 13206 corp: 4/114b lim: 50 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 ChangeBinInt- 00:07:35.486 [2024-12-12 10:09:48.979879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.486 [2024-12-12 10:09:48.979908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.486 [2024-12-12 10:09:48.979956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.487 [2024-12-12 10:09:48.979973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:48.980032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.487 [2024-12-12 10:09:48.980048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:48.980106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.487 [2024-12-12 10:09:48.980120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.487 #6 NEW cov: 12421 ft: 13786 corp: 5/155b lim: 50 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:07:35.487 [2024-12-12 10:09:49.019979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.487 [2024-12-12 10:09:49.020006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:49.020056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.487 [2024-12-12 10:09:49.020073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:49.020131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.487 [2024-12-12 10:09:49.020147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:49.020206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.487 [2024-12-12 10:09:49.020222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.487 #7 NEW cov: 12421 ft: 13828 corp: 6/195b lim: 50 exec/s: 0 rss: 73Mb L: 40/41 MS: 1 InsertRepeatedBytes- 00:07:35.487 [2024-12-12 10:09:49.059908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.487 [2024-12-12 10:09:49.059936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:49.059978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.487 [2024-12-12 10:09:49.059993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:49.060053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.487 [2024-12-12 10:09:49.060069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.487 #8 NEW cov: 12421 ft: 13960 corp: 7/232b lim: 50 exec/s: 0 rss: 73Mb L: 37/41 MS: 1 ChangeByte- 00:07:35.487 [2024-12-12 10:09:49.100211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.487 [2024-12-12 10:09:49.100238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:49.100301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.487 [2024-12-12 10:09:49.100316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:49.100373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.487 [2024-12-12 10:09:49.100390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.487 [2024-12-12 10:09:49.100450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.487 [2024-12-12 10:09:49.100465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.746 #9 NEW cov: 12421 ft: 14125 corp: 8/276b lim: 50 exec/s: 0 rss: 73Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:07:35.746 [2024-12-12 10:09:49.160228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.746 [2024-12-12 10:09:49.160255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.746 [2024-12-12 10:09:49.160299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.746 [2024-12-12 10:09:49.160315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.746 [2024-12-12 10:09:49.160378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.746 [2024-12-12 10:09:49.160395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.746 #10 NEW cov: 12421 ft: 14264 corp: 9/313b lim: 50 exec/s: 0 rss: 73Mb L: 37/44 MS: 1 ShuffleBytes- 00:07:35.746 [2024-12-12 10:09:49.220521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.746 [2024-12-12 10:09:49.220548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.746 [2024-12-12 10:09:49.220597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.746 [2024-12-12 10:09:49.220614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.746 [2024-12-12 10:09:49.220674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.746 [2024-12-12 10:09:49.220688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.746 [2024-12-12 10:09:49.220750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:35.746 [2024-12-12 10:09:49.220766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:35.746 #11 NEW cov: 12421 ft: 14289 corp: 10/353b lim: 50 exec/s: 0 rss: 73Mb L: 40/44 MS: 1 ChangeBinInt- 00:07:35.746 [2024-12-12 10:09:49.280530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.746 [2024-12-12 10:09:49.280558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.746 [2024-12-12 10:09:49.280600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.746 [2024-12-12 10:09:49.280615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.746 [2024-12-12 10:09:49.280676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.746 [2024-12-12 10:09:49.280692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:35.746 #12 NEW cov: 12421 ft: 14427 corp: 11/392b lim: 50 exec/s: 0 rss: 73Mb L: 39/44 MS: 1 InsertByte- 00:07:35.746 [2024-12-12 10:09:49.320319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.746 [2024-12-12 10:09:49.320346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.746 #14 NEW cov: 12421 ft: 15220 corp: 12/407b lim: 50 exec/s: 0 rss: 73Mb L: 15/44 MS: 2 InsertByte-CrossOver- 00:07:35.746 [2024-12-12 10:09:49.360733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:35.746 [2024-12-12 10:09:49.360760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:35.746 [2024-12-12 10:09:49.360806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:35.746 [2024-12-12 10:09:49.360823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:35.746 [2024-12-12 10:09:49.360899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:35.746 [2024-12-12 10:09:49.360916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.006 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:36.006 #15 NEW cov: 12444 ft: 15282 corp: 13/444b lim: 50 exec/s: 0 rss: 73Mb L: 37/44 MS: 1 ChangeBit- 00:07:36.006 [2024-12-12 10:09:49.420922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.006 [2024-12-12 10:09:49.420950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.006 [2024-12-12 10:09:49.420991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.006 [2024-12-12 10:09:49.421006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.006 [2024-12-12 10:09:49.421066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.006 [2024-12-12 10:09:49.421083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.006 #16 NEW cov: 12444 ft: 15297 corp: 14/476b lim: 50 exec/s: 0 rss: 74Mb L: 32/44 MS: 1 InsertRepeatedBytes- 00:07:36.006 [2024-12-12 10:09:49.481259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.006 [2024-12-12 10:09:49.481292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.006 [2024-12-12 10:09:49.481328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.006 [2024-12-12 10:09:49.481344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.006 [2024-12-12 10:09:49.481406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.006 [2024-12-12 10:09:49.481422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.006 [2024-12-12 10:09:49.481481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.006 [2024-12-12 10:09:49.481496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.006 #17 NEW cov: 12444 ft: 15318 corp: 15/518b lim: 50 exec/s: 17 rss: 74Mb L: 42/44 MS: 1 CopyPart- 00:07:36.006 [2024-12-12 10:09:49.541260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.006 [2024-12-12 10:09:49.541287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.006 [2024-12-12 10:09:49.541334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.006 [2024-12-12 10:09:49.541349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:07:36.006 [2024-12-12 10:09:49.541408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.006 [2024-12-12 10:09:49.541422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.006 #18 NEW cov: 12444 ft: 15330 corp: 16/556b lim: 50 exec/s: 18 rss: 74Mb L: 38/44 MS: 1 InsertByte- 00:07:36.006 [2024-12-12 10:09:49.601419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.006 [2024-12-12 10:09:49.601446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.006 [2024-12-12 10:09:49.601493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.006 [2024-12-12 10:09:49.601509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.006 [2024-12-12 10:09:49.601571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.006 [2024-12-12 10:09:49.601589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.006 #19 NEW cov: 12444 ft: 15347 corp: 17/594b lim: 50 exec/s: 19 rss: 74Mb L: 38/44 MS: 1 ChangeByte- 00:07:36.266 [2024-12-12 10:09:49.661750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.266 [2024-12-12 10:09:49.661778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.661831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.266 [2024-12-12 10:09:49.661846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.661918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.266 [2024-12-12 10:09:49.661934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.661992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.266 [2024-12-12 10:09:49.662007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.266 #20 NEW cov: 12444 ft: 15358 corp: 18/643b lim: 50 exec/s: 20 rss: 74Mb L: 49/49 MS: 1 CMP- DE: "\266;\335\277\024=\002\000"- 00:07:36.266 [2024-12-12 10:09:49.701988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.266 [2024-12-12 10:09:49.702016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.702074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.266 [2024-12-12 10:09:49.702089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.702145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.266 [2024-12-12 10:09:49.702159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.702216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.266 [2024-12-12 10:09:49.702232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.702288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:36.266 [2024-12-12 10:09:49.702304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:36.266 #21 NEW cov: 12444 ft: 15423 corp: 19/693b lim: 50 exec/s: 21 rss: 74Mb L: 50/50 MS: 1 CopyPart- 00:07:36.266 [2024-12-12 10:09:49.741979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.266 [2024-12-12 10:09:49.742006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.742064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.266 [2024-12-12 10:09:49.742079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.742135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.266 [2024-12-12 10:09:49.742150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.742209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.266 [2024-12-12 10:09:49.742224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.266 #22 NEW cov: 12444 ft: 15432 corp: 20/733b lim: 50 exec/s: 22 rss: 74Mb L: 40/50 MS: 1 ChangeBinInt- 00:07:36.266 [2024-12-12 10:09:49.781741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.266 [2024-12-12 10:09:49.781768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.781813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.266 [2024-12-12 10:09:49.781829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.266 #23 NEW cov: 12444 ft: 15687 corp: 21/753b lim: 50 exec/s: 23 rss: 74Mb L: 20/50 MS: 1 CrossOver- 00:07:36.266 [2024-12-12 10:09:49.822002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.266 [2024-12-12 10:09:49.822028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.822079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.266 [2024-12-12 10:09:49.822095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.822155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.266 [2024-12-12 10:09:49.822170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.266 #24 NEW cov: 12444 ft: 15693 corp: 22/791b lim: 50 exec/s: 24 rss: 74Mb L: 38/50 MS: 1 ChangeBinInt- 00:07:36.266 [2024-12-12 10:09:49.862155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.266 [2024-12-12 10:09:49.862181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.862218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.266 [2024-12-12 10:09:49.862233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.862291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.266 [2024-12-12 10:09:49.862306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.266 #25 NEW cov: 12444 ft: 15708 corp: 23/828b lim: 50 exec/s: 25 rss: 74Mb L: 37/50 MS: 1 ChangeBit- 00:07:36.266 [2024-12-12 10:09:49.902271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.266 [2024-12-12 10:09:49.902299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.902336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.266 [2024-12-12 10:09:49.902352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.266 [2024-12-12 10:09:49.902412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.266 [2024-12-12 10:09:49.902428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.525 #26 NEW cov: 12444 ft: 15736 corp: 24/860b lim: 50 exec/s: 26 rss: 74Mb L: 32/50 MS: 1 EraseBytes- 00:07:36.525 [2024-12-12 10:09:49.962455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.525 [2024-12-12 10:09:49.962482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:49.962530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.525 [2024-12-12 10:09:49.962547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:49.962606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.525 [2024-12-12 10:09:49.962623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.525 #27 NEW cov: 12444 ft: 15748 corp: 25/899b lim: 50 exec/s: 27 rss: 74Mb L: 39/50 MS: 1 InsertByte- 00:07:36.525 [2024-12-12 10:09:50.002891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.525 [2024-12-12 10:09:50.002919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.002976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.525 [2024-12-12 10:09:50.002993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.003050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.525 [2024-12-12 10:09:50.003067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.003126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.525 [2024-12-12 10:09:50.003141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.003200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:07:36.525 [2024-12-12 10:09:50.003216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:36.525 #28 NEW cov: 12444 ft: 15782 corp: 26/949b lim: 50 exec/s: 28 rss: 74Mb L: 50/50 MS: 1 ChangeByte- 00:07:36.525 [2024-12-12 10:09:50.042657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.525 [2024-12-12 10:09:50.042684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.042737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.525 [2024-12-12 10:09:50.042754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.042814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.525 [2024-12-12 10:09:50.042831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.525 #29 NEW cov: 12444 ft: 15811 corp: 27/986b lim: 50 exec/s: 29 rss: 74Mb L: 37/50 MS: 1 ChangeBinInt- 00:07:36.525 [2024-12-12 10:09:50.082930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.525 [2024-12-12 10:09:50.082959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.083013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.525 [2024-12-12 10:09:50.083029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.083089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.525 [2024-12-12 10:09:50.083106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.083163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.525 [2024-12-12 10:09:50.083179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.525 #30 NEW cov: 12444 ft: 15814 corp: 28/1031b lim: 50 exec/s: 30 rss: 74Mb L: 45/50 MS: 1 PersAutoDict- DE: "\266;\335\277\024=\002\000"- 00:07:36.525 [2024-12-12 10:09:50.143143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.525 [2024-12-12 10:09:50.143171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.143224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.525 [2024-12-12 10:09:50.143240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.143297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.525 [2024-12-12 10:09:50.143313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.525 [2024-12-12 10:09:50.143369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.525 [2024-12-12 10:09:50.143385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.783 #31 NEW cov: 12444 ft: 15849 corp: 29/1076b lim: 50 exec/s: 31 rss: 74Mb L: 45/50 MS: 1 InsertRepeatedBytes- 00:07:36.783 [2024-12-12 10:09:50.203307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.783 [2024-12-12 10:09:50.203334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.203388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.783 [2024-12-12 10:09:50.203404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.203463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.783 [2024-12-12 10:09:50.203479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.203538] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.783 [2024-12-12 10:09:50.203553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.783 #32 NEW cov: 12444 ft: 15924 corp: 30/1124b lim: 50 exec/s: 32 rss: 74Mb L: 48/50 MS: 1 InsertRepeatedBytes- 00:07:36.783 [2024-12-12 10:09:50.243250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.783 [2024-12-12 10:09:50.243277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.243316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.783 [2024-12-12 10:09:50.243332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.243391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.783 [2024-12-12 10:09:50.243406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.783 #33 NEW cov: 12444 ft: 15944 corp: 31/1162b lim: 50 exec/s: 33 rss: 74Mb L: 38/50 MS: 1 ShuffleBytes- 00:07:36.783 [2024-12-12 10:09:50.283059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.783 [2024-12-12 10:09:50.283086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.783 #34 NEW cov: 12444 ft: 15951 corp: 32/1177b lim: 50 exec/s: 34 rss: 74Mb L: 15/50 MS: 1 ChangeBinInt- 00:07:36.783 [2024-12-12 10:09:50.323610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.783 [2024-12-12 10:09:50.323637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.323704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.783 [2024-12-12 10:09:50.323725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.323784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.783 [2024-12-12 10:09:50.323800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.323860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:36.783 [2024-12-12 10:09:50.323876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:36.783 #35 NEW cov: 12444 ft: 15984 corp: 33/1222b lim: 50 exec/s: 35 rss: 74Mb L: 45/50 MS: 1 InsertRepeatedBytes- 00:07:36.783 [2024-12-12 10:09:50.383633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:36.783 [2024-12-12 10:09:50.383660] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.383708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:36.783 [2024-12-12 10:09:50.383727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:36.783 [2024-12-12 10:09:50.383787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:36.783 [2024-12-12 10:09:50.383802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.041 #36 NEW cov: 12444 ft: 16002 corp: 34/1260b lim: 50 exec/s: 36 rss: 75Mb L: 38/50 MS: 1 ChangeBit- 00:07:37.041 [2024-12-12 10:09:50.443797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:37.041 [2024-12-12 10:09:50.443823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.041 [2024-12-12 10:09:50.443862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:37.041 [2024-12-12 10:09:50.443877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.041 [2024-12-12 10:09:50.443937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:37.041 [2024-12-12 10:09:50.443952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.041 #37 NEW cov: 12444 ft: 16007 corp: 35/1298b lim: 50 exec/s: 37 rss: 75Mb L: 38/50 MS: 1 InsertByte- 00:07:37.041 [2024-12-12 10:09:50.483917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:37.041 [2024-12-12 10:09:50.483944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.041 [2024-12-12 10:09:50.483987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:37.041 [2024-12-12 10:09:50.484002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.041 [2024-12-12 10:09:50.484063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:37.041 [2024-12-12 10:09:50.484078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.041 #38 NEW cov: 12444 ft: 16021 corp: 36/1336b lim: 50 exec/s: 19 rss: 75Mb L: 38/50 MS: 1 ShuffleBytes- 00:07:37.041 #38 DONE cov: 12444 ft: 16021 corp: 36/1336b lim: 50 exec/s: 19 rss: 75Mb 00:07:37.041 ###### Recommended dictionary. ###### 00:07:37.041 "\266;\335\277\024=\002\000" # Uses: 1 00:07:37.041 ###### End of recommended dictionary. 
###### 00:07:37.041 Done 38 runs in 2 second(s) 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:37.041 10:09:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:37.041 [2024-12-12 10:09:50.678033] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:37.041 [2024-12-12 10:09:50.678108] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479199 ] 00:07:37.611 [2024-12-12 10:09:50.957507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.611 [2024-12-12 10:09:51.008913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.611 [2024-12-12 10:09:51.068311] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.611 [2024-12-12 10:09:51.084643] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:37.611 INFO: Running with entropic power schedule (0xFF, 100). 00:07:37.611 INFO: Seed: 3169728520 00:07:37.611 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:37.611 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:37.611 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:37.611 INFO: A corpus is not provided, starting from an empty corpus 00:07:37.611 #2 INITED exec/s: 0 rss: 65Mb 00:07:37.611 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:37.611 This may also happen if the target rejected all inputs we tried so far 00:07:37.611 [2024-12-12 10:09:51.129481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.611 [2024-12-12 10:09:51.129517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.611 [2024-12-12 10:09:51.129568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.611 [2024-12-12 10:09:51.129586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.611 [2024-12-12 10:09:51.129616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.611 [2024-12-12 10:09:51.129633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:37.872 NEW_FUNC[1/718]: 0x463418 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:37.872 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:37.872 #3 NEW cov: 12243 ft: 12225 corp: 2/63b lim: 85 exec/s: 0 rss: 73Mb L: 62/62 MS: 1 InsertRepeatedBytes- 00:07:37.872 [2024-12-12 10:09:51.500419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:37.872 [2024-12-12 10:09:51.500457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:37.872 [2024-12-12 10:09:51.500508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:37.872 [2024-12-12 10:09:51.500526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:37.872 [2024-12-12 
10:09:51.500555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:37.872 [2024-12-12 10:09:51.500571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.131 #6 NEW cov: 12356 ft: 12784 corp: 3/122b lim: 85 exec/s: 0 rss: 73Mb L: 59/62 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:38.131 [2024-12-12 10:09:51.560380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.131 [2024-12-12 10:09:51.560409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.131 [2024-12-12 10:09:51.560458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.131 [2024-12-12 10:09:51.560476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.131 #7 NEW cov: 12362 ft: 13468 corp: 4/160b lim: 85 exec/s: 0 rss: 73Mb L: 38/62 MS: 1 EraseBytes- 00:07:38.131 [2024-12-12 10:09:51.650683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.131 [2024-12-12 10:09:51.650711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.131 [2024-12-12 10:09:51.650766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.131 [2024-12-12 10:09:51.650787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.131 [2024-12-12 10:09:51.650817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.131 [2024-12-12 10:09:51.650833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.131 #8 NEW cov: 12447 ft: 13698 corp: 5/227b lim: 85 exec/s: 0 rss: 73Mb L: 67/67 MS: 1 InsertRepeatedBytes- 00:07:38.131 [2024-12-12 10:09:51.740877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.131 [2024-12-12 10:09:51.740906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.131 [2024-12-12 10:09:51.740952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.131 [2024-12-12 10:09:51.740970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.131 [2024-12-12 10:09:51.741000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.131 [2024-12-12 10:09:51.741016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.391 #11 NEW cov: 12447 ft: 13818 corp: 6/282b lim: 85 exec/s: 0 rss: 73Mb L: 55/67 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:38.391 [2024-12-12 10:09:51.791051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.391 [2024-12-12 
10:09:51.791079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.391 [2024-12-12 10:09:51.791127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.391 [2024-12-12 10:09:51.791144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.391 [2024-12-12 10:09:51.791173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.391 [2024-12-12 10:09:51.791189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.391 #12 NEW cov: 12447 ft: 14003 corp: 7/341b lim: 85 exec/s: 0 rss: 73Mb L: 59/67 MS: 1 ShuffleBytes- 00:07:38.391 [2024-12-12 10:09:51.851173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.391 [2024-12-12 10:09:51.851201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.391 [2024-12-12 10:09:51.851249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.391 [2024-12-12 10:09:51.851266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.391 [2024-12-12 10:09:51.851296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.391 [2024-12-12 10:09:51.851312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.391 #13 NEW cov: 12447 ft: 14053 corp: 8/407b lim: 85 exec/s: 0 rss: 73Mb L: 66/67 MS: 1 InsertRepeatedBytes- 00:07:38.391 [2024-12-12 10:09:51.941516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.391 [2024-12-12 10:09:51.941545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.391 [2024-12-12 10:09:51.941593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.391 [2024-12-12 10:09:51.941610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.391 [2024-12-12 10:09:51.941644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.391 [2024-12-12 10:09:51.941660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.391 [2024-12-12 10:09:51.941689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:38.391 [2024-12-12 10:09:51.941704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.391 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:38.391 #14 NEW cov: 12470 ft: 14437 corp: 9/486b lim: 85 exec/s: 0 rss: 74Mb L: 79/79 MS: 1 CrossOver- 00:07:38.650 [2024-12-12 
10:09:52.031733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.650 [2024-12-12 10:09:52.031765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.650 [2024-12-12 10:09:52.031798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.650 [2024-12-12 10:09:52.031816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.650 [2024-12-12 10:09:52.031847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.650 [2024-12-12 10:09:52.031863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.651 #15 NEW cov: 12470 ft: 14514 corp: 10/545b lim: 85 exec/s: 0 rss: 74Mb L: 59/79 MS: 1 ShuffleBytes- 00:07:38.651 [2024-12-12 10:09:52.081775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.651 [2024-12-12 10:09:52.081804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.651 [2024-12-12 10:09:52.081852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.651 [2024-12-12 10:09:52.081870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.651 [2024-12-12 10:09:52.081900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.651 [2024-12-12 10:09:52.081916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.651 #16 NEW cov: 12470 ft: 14598 corp: 11/605b lim: 85 exec/s: 16 rss: 74Mb L: 60/79 MS: 1 InsertByte- 00:07:38.651 [2024-12-12 10:09:52.141993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.651 [2024-12-12 10:09:52.142024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.651 [2024-12-12 10:09:52.142057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.651 [2024-12-12 10:09:52.142074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.651 [2024-12-12 10:09:52.142105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.651 [2024-12-12 10:09:52.142122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.651 #17 NEW cov: 12470 ft: 14619 corp: 12/664b lim: 85 exec/s: 17 rss: 74Mb L: 59/79 MS: 1 ChangeByte- 00:07:38.651 [2024-12-12 10:09:52.232302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.651 [2024-12-12 10:09:52.232332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.651 
[2024-12-12 10:09:52.232370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.651 [2024-12-12 10:09:52.232387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.651 [2024-12-12 10:09:52.232417] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.651 [2024-12-12 10:09:52.232432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.651 [2024-12-12 10:09:52.232461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:38.651 [2024-12-12 10:09:52.232476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.911 #18 NEW cov: 12470 ft: 14738 corp: 13/747b lim: 85 exec/s: 18 rss: 74Mb L: 83/83 MS: 1 CrossOver- 00:07:38.911 [2024-12-12 10:09:52.332566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.911 [2024-12-12 10:09:52.332597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.911 [2024-12-12 10:09:52.332631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.911 [2024-12-12 10:09:52.332649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:38.911 [2024-12-12 10:09:52.332679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.911 [2024-12-12 10:09:52.332695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.911 [2024-12-12 10:09:52.332749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:38.911 [2024-12-12 10:09:52.332766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:38.911 #19 NEW cov: 12470 ft: 14763 corp: 14/816b lim: 85 exec/s: 19 rss: 74Mb L: 69/83 MS: 1 InsertRepeatedBytes- 00:07:38.911 [2024-12-12 10:09:52.392482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.911 [2024-12-12 10:09:52.392513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.911 #20 NEW cov: 12470 ft: 15566 corp: 15/849b lim: 85 exec/s: 20 rss: 74Mb L: 33/83 MS: 1 EraseBytes- 00:07:38.911 [2024-12-12 10:09:52.492950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:38.911 [2024-12-12 10:09:52.492979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:38.911 [2024-12-12 10:09:52.493025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:38.911 [2024-12-12 10:09:52.493042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:38.911 [2024-12-12 10:09:52.493072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:38.911 [2024-12-12 10:09:52.493088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:38.911 [2024-12-12 10:09:52.493117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:38.911 [2024-12-12 10:09:52.493132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.171 #21 NEW cov: 12470 ft: 15621 corp: 16/928b lim: 85 exec/s: 21 rss: 74Mb L: 79/83 MS: 1 ChangeByte- 00:07:39.171 [2024-12-12 10:09:52.583104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:39.171 [2024-12-12 10:09:52.583137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.583185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:39.171 [2024-12-12 10:09:52.583202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.583232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:39.171 [2024-12-12 10:09:52.583248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.171 #22 NEW cov: 12470 ft: 15653 corp: 17/989b lim: 85 exec/s: 22 rss: 74Mb L: 61/83 MS: 1 InsertByte- 00:07:39.171 [2024-12-12 10:09:52.673404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:39.171 [2024-12-12 10:09:52.673432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.673478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:39.171 [2024-12-12 10:09:52.673495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.673525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:39.171 [2024-12-12 10:09:52.673541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.673569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:39.171 [2024-12-12 10:09:52.673585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.171 #23 NEW cov: 12470 ft: 15671 corp: 18/1070b lim: 85 exec/s: 23 rss: 74Mb L: 81/83 MS: 1 CrossOver- 00:07:39.171 [2024-12-12 10:09:52.733515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:39.171 [2024-12-12 10:09:52.733543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:39.171 [2024-12-12 10:09:52.733590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:39.171 [2024-12-12 10:09:52.733607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.733637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:39.171 [2024-12-12 10:09:52.733653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.733681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:39.171 [2024-12-12 10:09:52.733697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.171 #24 NEW cov: 12470 ft: 15710 corp: 19/1152b lim: 85 exec/s: 24 rss: 74Mb L: 82/83 MS: 1 InsertRepeatedBytes- 00:07:39.171 [2024-12-12 10:09:52.783809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:39.171 [2024-12-12 10:09:52.783837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.783884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:39.171 [2024-12-12 10:09:52.783900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.783934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:39.171 [2024-12-12 10:09:52.783950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.171 [2024-12-12 10:09:52.783978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:39.171 [2024-12-12 10:09:52.783994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.430 #25 NEW cov: 12470 ft: 15755 corp: 20/1232b lim: 85 exec/s: 25 rss: 74Mb L: 80/83 MS: 1 InsertRepeatedBytes- 00:07:39.430 [2024-12-12 10:09:52.843909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:39.430 [2024-12-12 10:09:52.843936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.430 [2024-12-12 10:09:52.843983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:39.430 [2024-12-12 10:09:52.844000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.430 [2024-12-12 10:09:52.844030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:39.430 [2024-12-12 10:09:52.844046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.430 #26 NEW cov: 12470 ft: 15768 corp: 21/1296b lim: 85 exec/s: 26 rss: 
74Mb L: 64/83 MS: 1 EraseBytes- 00:07:39.430 [2024-12-12 10:09:52.934175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:39.430 [2024-12-12 10:09:52.934204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.430 [2024-12-12 10:09:52.934251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:39.430 [2024-12-12 10:09:52.934268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.430 [2024-12-12 10:09:52.934298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:39.430 [2024-12-12 10:09:52.934314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.430 #27 NEW cov: 12470 ft: 15781 corp: 22/1355b lim: 85 exec/s: 27 rss: 74Mb L: 59/83 MS: 1 ChangeByte- 00:07:39.430 [2024-12-12 10:09:52.984276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:39.430 [2024-12-12 10:09:52.984305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.430 [2024-12-12 10:09:52.984352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:39.430 [2024-12-12 10:09:52.984368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.430 [2024-12-12 10:09:52.984398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:39.430 [2024-12-12 10:09:52.984415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.430 #28 NEW cov: 12470 ft: 15786 corp: 23/1418b lim: 85 exec/s: 28 rss: 74Mb L: 63/83 MS: 1 InsertByte- 00:07:39.430 [2024-12-12 10:09:53.034406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:39.430 [2024-12-12 10:09:53.034435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.430 [2024-12-12 10:09:53.034481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:39.430 [2024-12-12 10:09:53.034503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.430 [2024-12-12 10:09:53.034533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:39.430 [2024-12-12 10:09:53.034549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.690 #29 NEW cov: 12470 ft: 15803 corp: 24/1485b lim: 85 exec/s: 29 rss: 74Mb L: 67/83 MS: 1 ChangeByte- 00:07:39.690 [2024-12-12 10:09:53.124726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:39.690 [2024-12-12 10:09:53.124754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:39.690 [2024-12-12 10:09:53.124799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:39.690 [2024-12-12 10:09:53.124817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:39.690 [2024-12-12 10:09:53.124847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:39.690 [2024-12-12 10:09:53.124863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:39.690 [2024-12-12 10:09:53.124892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:39.690 [2024-12-12 10:09:53.124908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:39.690 #30 NEW cov: 12470 ft: 15812 corp: 25/1565b lim: 85 exec/s: 15 rss: 74Mb L: 80/83 MS: 1 InsertByte- 00:07:39.690 #30 DONE cov: 12470 ft: 15812 corp: 25/1565b lim: 85 exec/s: 15 rss: 74Mb 00:07:39.690 Done 30 runs in 2 second(s) 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.690 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.691 10:09:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:39.691 [2024-12-12 10:09:53.313623] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:39.691 [2024-12-12 10:09:53.313713] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479677 ] 00:07:40.018 [2024-12-12 10:09:53.592844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.295 [2024-12-12 10:09:53.649477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.295 [2024-12-12 10:09:53.709098] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.295 [2024-12-12 10:09:53.725429] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:40.295 INFO: Running with entropic power schedule (0xFF, 100). 00:07:40.295 INFO: Seed: 1514764638 00:07:40.295 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:40.295 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:40.295 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:40.295 INFO: A corpus is not provided, starting from an empty corpus 00:07:40.295 #2 INITED exec/s: 0 rss: 65Mb 00:07:40.295 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:40.295 This may also happen if the target rejected all inputs we tried so far 00:07:40.295 [2024-12-12 10:09:53.795369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.295 [2024-12-12 10:09:53.795411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.559 NEW_FUNC[1/717]: 0x466658 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:40.559 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:40.560 #3 NEW cov: 12172 ft: 12145 corp: 2/6b lim: 25 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CMP- DE: "s\000\000\000"- 00:07:40.560 [2024-12-12 10:09:54.146179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.560 [2024-12-12 10:09:54.146219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.560 [2024-12-12 10:09:54.146339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:40.560 [2024-12-12 10:09:54.146366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:40.560 [2024-12-12 10:09:54.146489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:40.560 [2024-12-12 10:09:54.146511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:40.560 #9 NEW cov: 12289 ft: 13178 corp: 3/25b lim: 25 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:40.560 [2024-12-12 10:09:54.185852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.560 [2024-12-12 10:09:54.185877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.835 #10 NEW cov: 12295 ft: 13414 corp: 4/31b lim: 25 exec/s: 0 rss: 73Mb L: 6/19 MS: 1 InsertByte- 00:07:40.835 [2024-12-12 10:09:54.255961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.835 [2024-12-12 10:09:54.255991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.835 #14 NEW cov: 12380 ft: 13704 corp: 5/36b lim: 25 exec/s: 0 rss: 73Mb L: 5/19 MS: 4 CrossOver-ChangeBit-InsertByte-CopyPart- 00:07:40.835 [2024-12-12 10:09:54.326136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.835 [2024-12-12 10:09:54.326167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.835 #15 NEW cov: 12380 ft: 13803 corp: 6/42b lim: 25 exec/s: 0 rss: 73Mb L: 6/19 MS: 1 CopyPart- 00:07:40.836 [2024-12-12 10:09:54.386328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.836 [2024-12-12 10:09:54.386357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:40.836 #16 NEW cov: 12380 ft: 13939 corp: 7/47b lim: 25 exec/s: 0 rss: 73Mb L: 5/19 MS: 1 ChangeBit- 00:07:40.836 [2024-12-12 10:09:54.436509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:40.836 [2024-12-12 10:09:54.436541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.122 #17 NEW cov: 12380 ft: 14025 corp: 8/52b lim: 25 exec/s: 0 rss: 73Mb L: 5/19 MS: 1 CopyPart- 00:07:41.122 [2024-12-12 10:09:54.506748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.122 [2024-12-12 10:09:54.506779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.122 #18 NEW cov: 12380 ft: 14181 corp: 9/58b lim: 25 exec/s: 0 rss: 74Mb L: 6/19 MS: 1 CrossOver- 00:07:41.122 [2024-12-12 10:09:54.577156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.122 [2024-12-12 10:09:54.577187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.122 [2024-12-12 10:09:54.577310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:41.122 [2024-12-12 10:09:54.577334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.122 #21 NEW cov: 12380 ft: 14413 corp: 10/68b lim: 25 exec/s: 0 rss: 74Mb L: 10/19 MS: 3 ChangeByte-CopyPart-CMP- DE: "\001\000\000\000\000\000\000\003"- 
00:07:41.122 [2024-12-12 10:09:54.627078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.122 [2024-12-12 10:09:54.627105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.122 #22 NEW cov: 12380 ft: 14467 corp: 11/74b lim: 25 exec/s: 0 rss: 74Mb L: 6/19 MS: 1 InsertByte- 00:07:41.122 [2024-12-12 10:09:54.677193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.122 [2024-12-12 10:09:54.677222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.122 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:41.122 #23 NEW cov: 12403 ft: 14593 corp: 12/80b lim: 25 exec/s: 0 rss: 74Mb L: 6/19 MS: 1 PersAutoDict- DE: "s\000\000\000"- 00:07:41.122 [2024-12-12 10:09:54.747855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.122 [2024-12-12 10:09:54.747884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.122 [2024-12-12 10:09:54.747956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:41.122 [2024-12-12 10:09:54.747980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.122 [2024-12-12 10:09:54.748111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:41.122 [2024-12-12 10:09:54.748132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.424 #24 NEW cov: 12403 ft: 14614 corp: 13/99b lim: 25 exec/s: 24 rss: 74Mb L: 19/19 MS: 1 CopyPart- 00:07:41.424 [2024-12-12 10:09:54.797613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.424 [2024-12-12 10:09:54.797646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.424 #27 NEW cov: 12403 ft: 14678 corp: 14/105b lim: 25 exec/s: 27 rss: 74Mb L: 6/19 MS: 3 InsertByte-InsertByte-InsertRepeatedBytes- 00:07:41.424 [2024-12-12 10:09:54.847770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.424 [2024-12-12 10:09:54.847800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.424 #28 NEW cov: 12403 ft: 14766 corp: 15/114b lim: 25 exec/s: 28 rss: 74Mb L: 9/19 MS: 1 InsertRepeatedBytes- 00:07:41.424 [2024-12-12 10:09:54.897960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.424 [2024-12-12 10:09:54.897991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.424 #33 NEW cov: 12403 ft: 14794 corp: 16/119b lim: 25 exec/s: 33 rss: 74Mb L: 5/19 MS: 5 EraseBytes-ShuffleBytes-PersAutoDict-ChangeBit-CopyPart- DE: "s\000\000\000"- 00:07:41.424 [2024-12-12 10:09:54.948149] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.424 [2024-12-12 10:09:54.948182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.424 #34 NEW cov: 12403 ft: 14807 corp: 17/125b lim: 25 exec/s: 34 rss: 74Mb L: 6/19 MS: 1 ChangeBinInt- 00:07:41.424 [2024-12-12 10:09:54.998541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.424 [2024-12-12 10:09:54.998571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.424 [2024-12-12 10:09:54.998685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:41.424 [2024-12-12 10:09:54.998707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.424 #35 NEW cov: 12403 ft: 14828 corp: 18/135b lim: 25 exec/s: 35 rss: 74Mb L: 10/19 MS: 1 InsertRepeatedBytes- 00:07:41.738 [2024-12-12 10:09:55.068572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.738 [2024-12-12 10:09:55.068603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.738 #36 NEW cov: 12403 ft: 14839 corp: 19/140b lim: 25 exec/s: 36 rss: 74Mb L: 5/19 MS: 1 ChangeBit- 00:07:41.738 [2024-12-12 10:09:55.118641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.738 [2024-12-12 10:09:55.118670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.738 #37 NEW cov: 12403 ft: 14891 corp: 20/146b lim: 25 exec/s: 37 rss: 74Mb L: 6/19 MS: 1 ChangeBit- 00:07:41.738 [2024-12-12 10:09:55.168773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.738 [2024-12-12 10:09:55.168803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.738 #38 NEW cov: 12403 ft: 14903 corp: 21/152b lim: 25 exec/s: 38 rss: 74Mb L: 6/19 MS: 1 ChangeBinInt- 00:07:41.738 [2024-12-12 10:09:55.239197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.738 [2024-12-12 10:09:55.239229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.738 [2024-12-12 10:09:55.239357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:41.738 [2024-12-12 10:09:55.239387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.738 #39 NEW cov: 12403 ft: 14987 corp: 22/162b lim: 25 exec/s: 39 rss: 74Mb L: 10/19 MS: 1 InsertByte- 00:07:41.738 [2024-12-12 10:09:55.309568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.738 [2024-12-12 10:09:55.309597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:41.738 [2024-12-12 10:09:55.309687] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:41.738 [2024-12-12 10:09:55.309709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:41.738 [2024-12-12 10:09:55.309843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:41.738 [2024-12-12 10:09:55.309868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:41.738 #40 NEW cov: 12403 ft: 15010 corp: 23/181b lim: 25 exec/s: 40 rss: 74Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:41.738 [2024-12-12 10:09:55.359429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:41.738 [2024-12-12 10:09:55.359455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.026 #41 NEW cov: 12403 ft: 15051 corp: 24/187b lim: 25 exec/s: 41 rss: 74Mb L: 6/19 MS: 1 ChangeByte- 00:07:42.026 [2024-12-12 10:09:55.429585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:42.026 [2024-12-12 10:09:55.429612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.026 #42 NEW cov: 12403 ft: 15078 corp: 25/193b lim: 25 exec/s: 42 rss: 74Mb L: 6/19 MS: 1 ChangeBit- 00:07:42.026 [2024-12-12 10:09:55.499795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:42.026 [2024-12-12 10:09:55.499824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.026 #43 NEW cov: 12403 ft: 15112 corp: 26/199b lim: 25 exec/s: 43 rss: 75Mb L: 6/19 MS: 1 PersAutoDict- DE: "s\000\000\000"- 00:07:42.026 [2024-12-12 10:09:55.570013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:42.026 [2024-12-12 10:09:55.570044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.026 #44 NEW cov: 12403 ft: 15134 corp: 27/208b lim: 25 exec/s: 44 rss: 75Mb L: 9/19 MS: 1 ShuffleBytes- 00:07:42.026 [2024-12-12 10:09:55.620125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:42.026 [2024-12-12 10:09:55.620159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.026 #45 NEW cov: 12403 ft: 15153 corp: 28/213b lim: 25 exec/s: 45 rss: 75Mb L: 5/19 MS: 1 ShuffleBytes- 00:07:42.285 [2024-12-12 10:09:55.690616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:42.285 [2024-12-12 10:09:55.690648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.286 [2024-12-12 10:09:55.690766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:42.286 [2024-12-12 10:09:55.690802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.286 
#46 NEW cov: 12403 ft: 15169 corp: 29/223b lim: 25 exec/s: 46 rss: 75Mb L: 10/19 MS: 1 ChangeByte- 00:07:42.286 [2024-12-12 10:09:55.760632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:42.286 [2024-12-12 10:09:55.760668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.286 #47 NEW cov: 12403 ft: 15178 corp: 30/232b lim: 25 exec/s: 23 rss: 75Mb L: 9/19 MS: 1 CopyPart- 00:07:42.286 #47 DONE cov: 12403 ft: 15178 corp: 30/232b lim: 25 exec/s: 23 rss: 75Mb 00:07:42.286 ###### Recommended dictionary. ###### 00:07:42.286 "s\000\000\000" # Uses: 3 00:07:42.286 "\001\000\000\000\000\000\000\003" # Uses: 0 00:07:42.286 ###### End of recommended dictionary. ###### 00:07:42.286 Done 47 runs in 2 second(s) 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:42.286 10:09:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:42.545 [2024-12-12 10:09:55.930316] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:42.545 [2024-12-12 10:09:55.930386] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480226 ] 00:07:42.804 [2024-12-12 10:09:56.212336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.804 [2024-12-12 10:09:56.273148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.804 [2024-12-12 10:09:56.332403] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.804 [2024-12-12 10:09:56.348734] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:42.804 INFO: Running with entropic power schedule (0xFF, 100). 00:07:42.804 INFO: Seed: 4139755645 00:07:42.804 INFO: Loaded 1 modules (391311 inline 8-bit counters): 391311 [0x2c934cc, 0x2cf2d5b), 00:07:42.804 INFO: Loaded 1 PC tables (391311 PCs): 391311 [0x2cf2d60,0x32eb650), 00:07:42.804 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:42.804 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.804 #2 INITED exec/s: 0 rss: 66Mb 00:07:42.804 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.804 This may also happen if the target rejected all inputs we tried so far 00:07:42.804 [2024-12-12 10:09:56.414468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.804 [2024-12-12 10:09:56.414498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:42.804 [2024-12-12 10:09:56.414541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.804 [2024-12-12 10:09:56.414556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:42.804 [2024-12-12 10:09:56.414611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.804 [2024-12-12 10:09:56.414625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:42.804 [2024-12-12 10:09:56.414681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.804 [2024-12-12 10:09:56.414696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.322 NEW_FUNC[1/718]: 0x467748 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:43.322 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:43.322 #5 NEW cov: 12230 ft: 12220 corp: 2/96b lim: 100 exec/s: 0 rss: 74Mb L: 95/95 MS: 3 CrossOver-ChangeBit-InsertRepeatedBytes- 00:07:43.322 [2024-12-12 10:09:56.755620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477329568454872 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.322 
[2024-12-12 10:09:56.755678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.322 #9 NEW cov: 12360 ft: 13843 corp: 3/126b lim: 100 exec/s: 0 rss: 74Mb L: 30/95 MS: 4 ShuffleBytes-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:07:43.322 [2024-12-12 10:09:56.816527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.322 [2024-12-12 10:09:56.816563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.322 [2024-12-12 10:09:56.816685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.322 [2024-12-12 10:09:56.816710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.322 [2024-12-12 10:09:56.816831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.322 [2024-12-12 10:09:56.816856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.322 [2024-12-12 10:09:56.816987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.322 [2024-12-12 10:09:56.817009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.322 #10 NEW cov: 12366 ft: 14087 corp: 4/221b lim: 100 exec/s: 0 rss: 74Mb L: 95/95 MS: 1 ChangeBinInt- 00:07:43.322 [2024-12-12 10:09:56.886653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.322 [2024-12-12 10:09:56.886688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.322 [2024-12-12 10:09:56.886832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.323 [2024-12-12 10:09:56.886855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.323 [2024-12-12 10:09:56.886980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.323 [2024-12-12 10:09:56.887001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.323 [2024-12-12 10:09:56.887124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.323 [2024-12-12 10:09:56.887147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.323 #11 NEW cov: 12451 ft: 14316 corp: 5/316b lim: 100 exec/s: 0 rss: 74Mb L: 95/95 MS: 1 ShuffleBytes- 00:07:43.323 [2024-12-12 10:09:56.956879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:43.323 [2024-12-12 10:09:56.956913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.323 [2024-12-12 10:09:56.957005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.323 [2024-12-12 10:09:56.957028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.323 [2024-12-12 10:09:56.957159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.323 [2024-12-12 10:09:56.957181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.323 [2024-12-12 10:09:56.957309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.323 [2024-12-12 10:09:56.957332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.582 #12 NEW cov: 12451 ft: 14417 corp: 6/415b lim: 100 exec/s: 0 rss: 74Mb L: 99/99 MS: 1 CrossOver- 00:07:43.582 [2024-12-12 10:09:57.006915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17940362859850037496 len:63737 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.006948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.582 [2024-12-12 10:09:57.007071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17940362863843014904 len:63737 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.007098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.582 [2024-12-12 10:09:57.007228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:17940362863843014904 len:63737 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.007267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.582 [2024-12-12 10:09:57.007393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17940362863843014904 len:63737 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.007420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.582 #14 NEW cov: 12451 ft: 14481 corp: 7/501b lim: 100 exec/s: 0 rss: 74Mb L: 86/99 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:43.582 [2024-12-12 10:09:57.056335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:184549376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.056370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.582 #16 NEW cov: 12451 ft: 14539 corp: 8/530b lim: 100 exec/s: 0 rss: 74Mb L: 29/99 MS: 2 ChangeBit-CrossOver- 00:07:43.582 [2024-12-12 10:09:57.107330] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.107368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.582 [2024-12-12 10:09:57.107476] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.107499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.582 [2024-12-12 10:09:57.107615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.107637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.582 [2024-12-12 10:09:57.107771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.107798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.582 #17 NEW cov: 12451 ft: 14609 corp: 9/625b lim: 100 exec/s: 0 rss: 74Mb L: 95/99 MS: 1 ChangeByte- 00:07:43.582 [2024-12-12 10:09:57.156686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:184549376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.582 [2024-12-12 10:09:57.156720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.582 #18 NEW cov: 12451 ft: 14681 corp: 10/655b lim: 100 exec/s: 0 rss: 74Mb L: 30/99 MS: 1 InsertByte- 00:07:43.841 [2024-12-12 10:09:57.227662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.841 [2024-12-12 10:09:57.227695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.841 [2024-12-12 10:09:57.227774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:50331648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.841 [2024-12-12 10:09:57.227799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.842 [2024-12-12 10:09:57.227919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.227940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.842 [2024-12-12 10:09:57.228057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.228082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.842 #19 NEW cov: 12451 ft: 14739 corp: 11/750b lim: 100 exec/s: 0 rss: 74Mb L: 95/99 MS: 1 CMP- DE: "\377\003\000\000\000\000\000\000"- 00:07:43.842 [2024-12-12 10:09:57.297834] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17940362859850037496 len:63737 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.297865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.842 [2024-12-12 10:09:57.297939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17940362863843014904 len:63737 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.297976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.842 [2024-12-12 10:09:57.298123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.298151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.842 [2024-12-12 10:09:57.298269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:17940362863843014904 len:63737 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.298293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:43.842 NEW_FUNC[1/1]: 0x1c5f728 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:43.842 #20 NEW cov: 12474 ft: 14772 corp: 12/836b lim: 100 exec/s: 0 rss: 74Mb L: 86/99 MS: 1 CrossOver- 00:07:43.842 [2024-12-12 10:09:57.367255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:184549376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.367281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.842 #21 NEW cov: 12474 ft: 14787 corp: 13/866b lim: 100 exec/s: 0 rss: 74Mb L: 30/99 MS: 1 InsertByte- 00:07:43.842 [2024-12-12 10:09:57.417442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.417475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.842 #26 NEW cov: 12474 ft: 14838 corp: 14/890b lim: 100 exec/s: 26 rss: 74Mb L: 24/99 MS: 5 ChangeBit-ShuffleBytes-CMP-ShuffleBytes-InsertRepeatedBytes- DE: "\000\000"- 00:07:43.842 [2024-12-12 10:09:57.468449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.468482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:43.842 [2024-12-12 10:09:57.468566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.468589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:43.842 [2024-12-12 10:09:57.468704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:43.842 [2024-12-12 10:09:57.468728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:43.842 [2024-12-12 10:09:57.468850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.842 [2024-12-12 10:09:57.468868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.101 #27 NEW cov: 12474 ft: 14863 corp: 15/985b lim: 100 exec/s: 27 rss: 75Mb L: 95/99 MS: 1 ChangeBinInt- 00:07:44.101 [2024-12-12 10:09:57.538583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15625477329568454872 len:55513 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.538616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.101 [2024-12-12 10:09:57.538732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15625476405311625432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.538760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.101 [2024-12-12 10:09:57.538888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.538912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.101 [2024-12-12 10:09:57.539040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.539066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.101 #28 NEW cov: 12474 ft: 14869 corp: 16/1072b lim: 100 exec/s: 28 rss: 75Mb L: 87/99 MS: 1 InsertRepeatedBytes- 00:07:44.101 [2024-12-12 10:09:57.608858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:24321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.608891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.101 [2024-12-12 10:09:57.609005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.609032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.101 [2024-12-12 10:09:57.609149] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.609185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.101 [2024-12-12 10:09:57.609313] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.609337] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.101 #29 NEW cov: 12474 ft: 14878 corp: 17/1167b lim: 100 exec/s: 29 rss: 75Mb L: 95/99 MS: 1 ChangeBinInt- 00:07:44.101 [2024-12-12 10:09:57.658162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:184549376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.658197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.101 #30 NEW cov: 12474 ft: 14894 corp: 18/1196b lim: 100 exec/s: 30 rss: 75Mb L: 29/99 MS: 1 ChangeByte- 00:07:44.101 [2024-12-12 10:09:57.708851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:184549376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.101 [2024-12-12 10:09:57.708885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.102 [2024-12-12 10:09:57.709012] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492443134787656 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.102 [2024-12-12 10:09:57.709037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.102 [2024-12-12 10:09:57.709170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.102 [2024-12-12 10:09:57.709192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.360 #31 NEW cov: 12474 ft: 15250 corp: 19/1265b lim: 100 exec/s: 31 rss: 75Mb L: 69/99 MS: 1 InsertRepeatedBytes- 00:07:44.360 [2024-12-12 10:09:57.779444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.360 [2024-12-12 10:09:57.779482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.360 [2024-12-12 10:09:57.779581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.360 [2024-12-12 10:09:57.779606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.360 [2024-12-12 10:09:57.779734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.360 [2024-12-12 10:09:57.779756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.360 [2024-12-12 10:09:57.779891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.360 [2024-12-12 10:09:57.779915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.361 #32 NEW cov: 12474 ft: 15339 corp: 20/1364b lim: 100 exec/s: 32 rss: 75Mb L: 99/99 MS: 1 CopyPart- 00:07:44.361 [2024-12-12 10:09:57.829539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.829571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.361 [2024-12-12 10:09:57.829691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.829718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.361 [2024-12-12 10:09:57.829842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.829863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.361 [2024-12-12 10:09:57.829989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:3298534883328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.830015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.361 #33 NEW cov: 12474 ft: 15374 corp: 21/1461b lim: 100 exec/s: 33 rss: 75Mb L: 97/99 MS: 1 CMP- DE: "\000\003"- 00:07:44.361 [2024-12-12 10:09:57.899724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4398046511104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.899758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.361 [2024-12-12 10:09:57.899828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.899852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.361 [2024-12-12 10:09:57.899985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.900009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.361 [2024-12-12 10:09:57.900127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:3298534883328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.900154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.361 #34 NEW cov: 12474 ft: 15391 corp: 22/1558b lim: 100 exec/s: 34 rss: 75Mb L: 97/99 MS: 1 ChangeBit- 00:07:44.361 [2024-12-12 10:09:57.969985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.970017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.361 [2024-12-12 10:09:57.970108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.970131] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.361 [2024-12-12 10:09:57.970253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.970273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.361 [2024-12-12 10:09:57.970394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.361 [2024-12-12 10:09:57.970414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.620 #35 NEW cov: 12474 ft: 15433 corp: 23/1640b lim: 100 exec/s: 35 rss: 75Mb L: 82/99 MS: 1 EraseBytes- 00:07:44.620 [2024-12-12 10:09:58.040243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.040274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.620 [2024-12-12 10:09:58.040360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.040382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.620 [2024-12-12 10:09:58.040506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.040528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.620 [2024-12-12 10:09:58.040654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.040683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.620 #36 NEW cov: 12474 ft: 15444 corp: 24/1735b lim: 100 exec/s: 36 rss: 75Mb L: 95/99 MS: 1 ChangeBit- 00:07:44.620 [2024-12-12 10:09:58.089766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17940362859850037496 len:63737 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.089795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.620 [2024-12-12 10:09:58.089897] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:17940362863843014904 len:63737 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.089928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.620 #37 NEW cov: 12474 ft: 15759 corp: 25/1787b lim: 100 exec/s: 37 rss: 75Mb L: 52/99 MS: 1 EraseBytes- 00:07:44.620 [2024-12-12 10:09:58.139628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 
10:09:58.139655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.620 #38 NEW cov: 12474 ft: 15771 corp: 26/1811b lim: 100 exec/s: 38 rss: 75Mb L: 24/99 MS: 1 ChangeBit- 00:07:44.620 [2024-12-12 10:09:58.210705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4398046511104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.210747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.620 [2024-12-12 10:09:58.210848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.210873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.620 [2024-12-12 10:09:58.210997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.211020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.620 [2024-12-12 10:09:58.211147] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:3298534883328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.620 [2024-12-12 10:09:58.211173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.620 #39 NEW cov: 12474 ft: 15788 corp: 27/1908b lim: 100 exec/s: 39 rss: 75Mb L: 97/99 MS: 1 CopyPart- 00:07:44.879 [2024-12-12 10:09:58.280881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.280916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.879 [2024-12-12 10:09:58.281008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.281028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.879 [2024-12-12 10:09:58.281153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.281178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.879 [2024-12-12 10:09:58.281304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.281329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.879 #40 NEW cov: 12474 ft: 15791 corp: 28/2007b lim: 100 exec/s: 40 rss: 75Mb L: 99/99 MS: 1 CopyPart- 00:07:44.879 [2024-12-12 10:09:58.351132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:7089336936655118946 len:25187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 
[2024-12-12 10:09:58.351167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.879 [2024-12-12 10:09:58.351298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7089336938131513954 len:25187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.351323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.879 [2024-12-12 10:09:58.351443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:7089336938131513954 len:25187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.351469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.879 [2024-12-12 10:09:58.351602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:7089336938131513954 len:25187 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.351629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.879 #41 NEW cov: 12474 ft: 15801 corp: 29/2097b lim: 100 exec/s: 41 rss: 75Mb L: 90/99 MS: 1 InsertRepeatedBytes- 00:07:44.879 [2024-12-12 10:09:58.401286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4398046511104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.401321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:44.879 [2024-12-12 10:09:58.401427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.401451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:44.879 [2024-12-12 10:09:58.401570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.879 [2024-12-12 10:09:58.401590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:44.879 [2024-12-12 10:09:58.401719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:12884901888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:44.880 [2024-12-12 10:09:58.401747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:44.880 #42 NEW cov: 12474 ft: 15815 corp: 30/2195b lim: 100 exec/s: 21 rss: 76Mb L: 98/99 MS: 1 CopyPart- 00:07:44.880 #42 DONE cov: 12474 ft: 15815 corp: 30/2195b lim: 100 exec/s: 21 rss: 76Mb 00:07:44.880 ###### Recommended dictionary. ###### 00:07:44.880 "\377\003\000\000\000\000\000\000" # Uses: 0 00:07:44.880 "\000\000" # Uses: 0 00:07:44.880 "\000\003" # Uses: 0 00:07:44.880 ###### End of recommended dictionary. 
###### 00:07:44.880 Done 42 runs in 2 second(s) 00:07:45.139 10:09:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:45.139 10:09:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:45.139 10:09:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.139 10:09:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:45.139 00:07:45.139 real 1m6.018s 00:07:45.139 user 1m40.339s 00:07:45.139 sys 0m9.319s 00:07:45.139 10:09:58 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.139 10:09:58 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:45.139 ************************************ 00:07:45.139 END TEST nvmf_llvm_fuzz 00:07:45.139 ************************************ 00:07:45.139 10:09:58 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:45.139 10:09:58 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:45.139 10:09:58 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:45.139 10:09:58 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.139 10:09:58 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.139 10:09:58 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:45.139 ************************************ 00:07:45.139 START TEST vfio_llvm_fuzz 00:07:45.139 ************************************ 00:07:45.139 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:45.139 * Looking for test storage... 
00:07:45.139 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:45.139 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.139 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.139 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.401 --rc genhtml_branch_coverage=1 00:07:45.401 --rc genhtml_function_coverage=1 00:07:45.401 --rc genhtml_legend=1 00:07:45.401 --rc geninfo_all_blocks=1 00:07:45.401 --rc geninfo_unexecuted_blocks=1 00:07:45.401 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:45.401 ' 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.401 --rc genhtml_branch_coverage=1 00:07:45.401 --rc genhtml_function_coverage=1 00:07:45.401 --rc genhtml_legend=1 00:07:45.401 --rc geninfo_all_blocks=1 00:07:45.401 --rc geninfo_unexecuted_blocks=1 00:07:45.401 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:45.401 ' 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.401 --rc genhtml_branch_coverage=1 00:07:45.401 --rc genhtml_function_coverage=1 00:07:45.401 --rc genhtml_legend=1 00:07:45.401 --rc geninfo_all_blocks=1 00:07:45.401 --rc geninfo_unexecuted_blocks=1 00:07:45.401 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:45.401 ' 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.401 --rc genhtml_branch_coverage=1 00:07:45.401 --rc genhtml_function_coverage=1 00:07:45.401 --rc genhtml_legend=1 00:07:45.401 --rc geninfo_all_blocks=1 00:07:45.401 --rc geninfo_unexecuted_blocks=1 00:07:45.401 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:45.401 ' 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:45.401 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:45.402 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:45.402 #define SPDK_CONFIG_H 00:07:45.402 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:45.402 #define SPDK_CONFIG_APPS 1 00:07:45.403 #define SPDK_CONFIG_ARCH native 00:07:45.403 #undef SPDK_CONFIG_ASAN 00:07:45.403 #undef SPDK_CONFIG_AVAHI 00:07:45.403 #undef SPDK_CONFIG_CET 00:07:45.403 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:45.403 #define SPDK_CONFIG_COVERAGE 1 00:07:45.403 #define SPDK_CONFIG_CROSS_PREFIX 00:07:45.403 #undef SPDK_CONFIG_CRYPTO 00:07:45.403 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:45.403 #undef SPDK_CONFIG_CUSTOMOCF 00:07:45.403 #undef SPDK_CONFIG_DAOS 00:07:45.403 #define SPDK_CONFIG_DAOS_DIR 00:07:45.403 #define SPDK_CONFIG_DEBUG 1 00:07:45.403 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:45.403 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:45.403 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:45.403 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:45.403 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:45.403 #undef SPDK_CONFIG_DPDK_UADK 00:07:45.403 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:45.403 #define SPDK_CONFIG_EXAMPLES 1 00:07:45.403 #undef SPDK_CONFIG_FC 00:07:45.403 #define SPDK_CONFIG_FC_PATH 00:07:45.403 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:45.403 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:45.403 #define SPDK_CONFIG_FSDEV 1 00:07:45.403 #undef SPDK_CONFIG_FUSE 00:07:45.403 #define SPDK_CONFIG_FUZZER 1 00:07:45.403 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:45.403 #undef 
SPDK_CONFIG_GOLANG 00:07:45.403 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:45.403 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:45.403 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:45.403 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:45.403 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:45.403 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:45.403 #undef SPDK_CONFIG_HAVE_LZ4 00:07:45.403 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:45.403 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:45.403 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:45.403 #define SPDK_CONFIG_IDXD 1 00:07:45.403 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:45.403 #undef SPDK_CONFIG_IPSEC_MB 00:07:45.403 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:45.403 #define SPDK_CONFIG_ISAL 1 00:07:45.403 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:45.403 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:45.403 #define SPDK_CONFIG_LIBDIR 00:07:45.403 #undef SPDK_CONFIG_LTO 00:07:45.403 #define SPDK_CONFIG_MAX_LCORES 128 00:07:45.403 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:07:45.403 #define SPDK_CONFIG_NVME_CUSE 1 00:07:45.403 #undef SPDK_CONFIG_OCF 00:07:45.403 #define SPDK_CONFIG_OCF_PATH 00:07:45.403 #define SPDK_CONFIG_OPENSSL_PATH 00:07:45.403 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:45.403 #define SPDK_CONFIG_PGO_DIR 00:07:45.403 #undef SPDK_CONFIG_PGO_USE 00:07:45.403 #define SPDK_CONFIG_PREFIX /usr/local 00:07:45.403 #undef SPDK_CONFIG_RAID5F 00:07:45.403 #undef SPDK_CONFIG_RBD 00:07:45.403 #define SPDK_CONFIG_RDMA 1 00:07:45.403 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:45.403 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:45.403 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:45.403 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:45.403 #undef SPDK_CONFIG_SHARED 00:07:45.403 #undef SPDK_CONFIG_SMA 00:07:45.403 #define SPDK_CONFIG_TESTS 1 00:07:45.403 #undef SPDK_CONFIG_TSAN 00:07:45.403 #define SPDK_CONFIG_UBLK 1 00:07:45.403 #define SPDK_CONFIG_UBSAN 1 00:07:45.403 #undef SPDK_CONFIG_UNIT_TESTS 00:07:45.403 #undef SPDK_CONFIG_URING 00:07:45.403 #define SPDK_CONFIG_URING_PATH 00:07:45.403 #undef SPDK_CONFIG_URING_ZNS 00:07:45.403 #undef SPDK_CONFIG_USDT 00:07:45.403 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:45.403 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:45.403 #define SPDK_CONFIG_VFIO_USER 1 00:07:45.403 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:45.403 #define SPDK_CONFIG_VHOST 1 00:07:45.403 #define SPDK_CONFIG_VIRTIO 1 00:07:45.403 #undef SPDK_CONFIG_VTUNE 00:07:45.403 #define SPDK_CONFIG_VTUNE_DIR 00:07:45.403 #define SPDK_CONFIG_WERROR 1 00:07:45.403 #define SPDK_CONFIG_WPDK_DIR 00:07:45.403 #undef SPDK_CONFIG_XNVME 00:07:45.403 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:45.403 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:45.404 10:09:58 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
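[editor's note] The long run of ": 0" / "export SPDK_TEST_*" pairs in the trace above (and continuing just below) is autotest_common.sh giving every test flag a default before exporting it, so later scripts can test the flags without guarding against unset variables. A minimal sketch of that defaulting idiom, using two flags visible in the trace; this illustrates the pattern only and is not the exact SPDK source:

    # keep any value already provided by autorun-spdk.conf, otherwise default to 0,
    # then export so child processes (fuzzers, rpc scripts) see the same flags
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME
    : "${SPDK_TEST_FUZZER:=0}"     # set to 1 by autorun-spdk.conf in this particular run
    export SPDK_TEST_FUZZER

Under xtrace this expands to exactly the ": 0" entries seen in the log when a flag is unset.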
00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.404 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:45.405 10:09:58 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 480791 ]] 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 480791 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.puFuN9 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.puFuN9/tests/vfio /tmp/spdk.puFuN9 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=785162240 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4499267584 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=54102056960 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730619392 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=7628562432 00:07:45.405 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30861881344 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865309696 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340125696 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346126336 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=6000640 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30865125376 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865309696 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=184320 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173048832 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173061120 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:07:45.406 * Looking for test storage... 
00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=54102056960 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=9843154944 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:45.406 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:45.406 10:09:58 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:45.406 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:45.406 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:45.406 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:45.406 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.406 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.406 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.665 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.665 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.665 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.665 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.665 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.665 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.665 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.666 --rc genhtml_branch_coverage=1 00:07:45.666 --rc genhtml_function_coverage=1 00:07:45.666 --rc genhtml_legend=1 00:07:45.666 --rc geninfo_all_blocks=1 00:07:45.666 --rc geninfo_unexecuted_blocks=1 00:07:45.666 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:45.666 ' 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.666 --rc genhtml_branch_coverage=1 00:07:45.666 --rc genhtml_function_coverage=1 00:07:45.666 --rc genhtml_legend=1 00:07:45.666 --rc geninfo_all_blocks=1 00:07:45.666 --rc geninfo_unexecuted_blocks=1 00:07:45.666 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:45.666 ' 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.666 --rc genhtml_branch_coverage=1 00:07:45.666 --rc genhtml_function_coverage=1 00:07:45.666 --rc genhtml_legend=1 00:07:45.666 --rc geninfo_all_blocks=1 00:07:45.666 --rc geninfo_unexecuted_blocks=1 00:07:45.666 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:45.666 ' 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.666 --rc genhtml_branch_coverage=1 00:07:45.666 --rc genhtml_function_coverage=1 00:07:45.666 --rc genhtml_legend=1 00:07:45.666 --rc geninfo_all_blocks=1 00:07:45.666 --rc geninfo_unexecuted_blocks=1 00:07:45.666 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:45.666 ' 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:45.666 10:09:59 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:45.666 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:45.666 10:09:59 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:45.666 [2024-12-12 10:09:59.147122] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:45.666 [2024-12-12 10:09:59.147205] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480855 ] 00:07:45.666 [2024-12-12 10:09:59.243935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.666 [2024-12-12 10:09:59.284664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.925 INFO: Running with entropic power schedule (0xFF, 100). 00:07:45.925 INFO: Seed: 2951783107 00:07:45.925 INFO: Loaded 1 modules (388547 inline 8-bit counters): 388547 [0x2c55ccc, 0x2cb4a8f), 00:07:45.925 INFO: Loaded 1 PC tables (388547 PCs): 388547 [0x2cb4a90,0x32a26c0), 00:07:45.925 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:45.925 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.925 #2 INITED exec/s: 0 rss: 68Mb 00:07:45.925 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:45.925 This may also happen if the target rejected all inputs we tried so far 00:07:45.925 [2024-12-12 10:09:59.523371] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:46.443 NEW_FUNC[1/675]: 0x43b608 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:46.443 NEW_FUNC[2/675]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:46.443 #12 NEW cov: 11248 ft: 10818 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 5 ChangeByte-CopyPart-CrossOver-ChangeBit-CMP- DE: "\377\377\377\003"- 00:07:46.443 #17 NEW cov: 11262 ft: 14368 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 5 InsertRepeatedBytes-ShuffleBytes-ChangeByte-InsertByte-InsertByte- 00:07:46.701 #18 NEW cov: 11262 ft: 16304 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 CrossOver- 00:07:46.701 NEW_FUNC[1/1]: 0x1c2bb78 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:46.701 #19 NEW cov: 11282 ft: 17062 corp: 5/25b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:46.960 #20 NEW cov: 11282 ft: 17378 corp: 6/31b lim: 6 exec/s: 0 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:07:46.960 #21 NEW cov: 11282 ft: 17691 corp: 7/37b lim: 6 exec/s: 21 rss: 77Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:47.218 #22 NEW cov: 11282 ft: 17801 corp: 8/43b lim: 6 exec/s: 22 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:47.218 #23 NEW cov: 11282 ft: 18454 corp: 9/49b lim: 6 exec/s: 23 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:07:47.477 #29 NEW cov: 11282 ft: 18800 corp: 10/55b lim: 6 exec/s: 29 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:07:47.477 #31 NEW cov: 11282 ft: 18853 corp: 11/61b lim: 6 exec/s: 31 rss: 77Mb L: 6/6 MS: 2 EraseBytes-CopyPart- 00:07:47.736 #32 NEW cov: 11282 ft: 18958 corp: 12/67b lim: 6 exec/s: 32 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:07:47.736 #33 NEW cov: 11289 ft: 19013 corp: 13/73b 
lim: 6 exec/s: 33 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:07:47.995 #36 NEW cov: 11289 ft: 19231 corp: 14/79b lim: 6 exec/s: 36 rss: 77Mb L: 6/6 MS: 3 CrossOver-PersAutoDict-CopyPart- DE: "\377\377\377\003"- 00:07:47.995 #37 NEW cov: 11289 ft: 19476 corp: 15/85b lim: 6 exec/s: 18 rss: 77Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:47.995 #37 DONE cov: 11289 ft: 19476 corp: 15/85b lim: 6 exec/s: 18 rss: 77Mb 00:07:47.995 ###### Recommended dictionary. ###### 00:07:47.995 "\377\377\377\003" # Uses: 1 00:07:47.995 ###### End of recommended dictionary. ###### 00:07:47.995 Done 37 runs in 2 second(s) 00:07:47.995 [2024-12-12 10:10:01.600915] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:48.254 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:48.254 10:10:01 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:48.254 [2024-12-12 10:10:01.870589] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 
24.03.0 initialization... 00:07:48.254 [2024-12-12 10:10:01.870656] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481392 ] 00:07:48.513 [2024-12-12 10:10:01.964629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.513 [2024-12-12 10:10:02.004642] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.772 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.772 INFO: Seed: 1379814765 00:07:48.772 INFO: Loaded 1 modules (388547 inline 8-bit counters): 388547 [0x2c55ccc, 0x2cb4a8f), 00:07:48.772 INFO: Loaded 1 PC tables (388547 PCs): 388547 [0x2cb4a90,0x32a26c0), 00:07:48.772 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:48.772 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.772 #2 INITED exec/s: 0 rss: 68Mb 00:07:48.772 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:48.772 This may also happen if the target rejected all inputs we tried so far 00:07:48.772 [2024-12-12 10:10:02.248974] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:48.772 [2024-12-12 10:10:02.322575] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:48.772 [2024-12-12 10:10:02.322602] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:48.772 [2024-12-12 10:10:02.322621] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:49.290 NEW_FUNC[1/678]: 0x43bba8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:49.290 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:49.290 #6 NEW cov: 11251 ft: 10814 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 InsertByte-InsertByte-ShuffleBytes-InsertByte- 00:07:49.290 [2024-12-12 10:10:02.802954] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:49.290 [2024-12-12 10:10:02.802987] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:49.290 [2024-12-12 10:10:02.803005] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:49.290 #12 NEW cov: 11265 ft: 14066 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeByte- 00:07:49.549 [2024-12-12 10:10:02.984983] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:49.549 [2024-12-12 10:10:02.985007] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:49.549 [2024-12-12 10:10:02.985025] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:49.549 NEW_FUNC[1/1]: 0x1c2bb78 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:49.549 #21 NEW cov: 11282 ft: 14844 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 4 EraseBytes-ChangeByte-EraseBytes-CopyPart- 00:07:49.549 [2024-12-12 10:10:03.173962] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:49.549 [2024-12-12 10:10:03.173985] vfio_user.c:3143:vfio_user_log: *ERROR*: 
/tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:49.549 [2024-12-12 10:10:03.174002] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:49.808 #27 NEW cov: 11282 ft: 15955 corp: 5/17b lim: 4 exec/s: 27 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:07:49.808 [2024-12-12 10:10:03.369383] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:49.808 [2024-12-12 10:10:03.369404] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:49.808 [2024-12-12 10:10:03.369422] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:50.067 #28 NEW cov: 11282 ft: 16447 corp: 6/21b lim: 4 exec/s: 28 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:07:50.067 [2024-12-12 10:10:03.551513] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:50.067 [2024-12-12 10:10:03.551535] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:50.067 [2024-12-12 10:10:03.551553] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:50.067 #29 NEW cov: 11282 ft: 16947 corp: 7/25b lim: 4 exec/s: 29 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:50.326 [2024-12-12 10:10:03.732781] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:50.326 [2024-12-12 10:10:03.732804] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:50.326 [2024-12-12 10:10:03.732841] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:50.326 #30 NEW cov: 11282 ft: 16976 corp: 8/29b lim: 4 exec/s: 30 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:07:50.326 [2024-12-12 10:10:03.915891] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:50.326 [2024-12-12 10:10:03.915913] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:50.326 [2024-12-12 10:10:03.915930] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:50.585 #31 NEW cov: 11289 ft: 17255 corp: 9/33b lim: 4 exec/s: 31 rss: 76Mb L: 4/4 MS: 1 CrossOver- 00:07:50.585 [2024-12-12 10:10:04.103084] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:50.585 [2024-12-12 10:10:04.103106] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:50.585 [2024-12-12 10:10:04.103123] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:50.585 #37 NEW cov: 11289 ft: 17396 corp: 10/37b lim: 4 exec/s: 37 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:07:50.844 [2024-12-12 10:10:04.290612] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:50.844 [2024-12-12 10:10:04.290634] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:50.844 [2024-12-12 10:10:04.290651] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:50.844 #38 NEW cov: 11289 ft: 17436 corp: 11/41b lim: 4 exec/s: 19 rss: 77Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:50.844 #38 DONE cov: 11289 ft: 17436 corp: 11/41b lim: 4 exec/s: 19 rss: 77Mb 00:07:50.844 Done 38 runs in 2 second(s) 00:07:50.844 [2024-12-12 10:10:04.414912] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf 
/tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:51.103 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:51.103 10:10:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:51.103 [2024-12-12 10:10:04.684481] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:51.103 [2024-12-12 10:10:04.684550] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481827 ] 00:07:51.362 [2024-12-12 10:10:04.779291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.362 [2024-12-12 10:10:04.819921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.622 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:51.622 INFO: Seed: 4189827112 00:07:51.622 INFO: Loaded 1 modules (388547 inline 8-bit counters): 388547 [0x2c55ccc, 0x2cb4a8f), 00:07:51.622 INFO: Loaded 1 PC tables (388547 PCs): 388547 [0x2cb4a90,0x32a26c0), 00:07:51.622 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:51.622 INFO: A corpus is not provided, starting from an empty corpus 00:07:51.622 #2 INITED exec/s: 0 rss: 67Mb 00:07:51.622 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:51.622 This may also happen if the target rejected all inputs we tried so far 00:07:51.622 [2024-12-12 10:10:05.055418] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:51.622 [2024-12-12 10:10:05.123571] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:51.881 NEW_FUNC[1/677]: 0x43c598 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:51.881 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:51.881 #42 NEW cov: 11234 ft: 11190 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 5 InsertByte-CopyPart-ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:07:52.139 [2024-12-12 10:10:05.605568] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:52.139 #43 NEW cov: 11248 ft: 14652 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:52.398 [2024-12-12 10:10:05.799167] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:52.398 NEW_FUNC[1/1]: 0x1c2bb78 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:52.398 #44 NEW cov: 11265 ft: 15936 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 CrossOver- 00:07:52.398 [2024-12-12 10:10:05.997863] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:52.657 #45 NEW cov: 11265 ft: 16328 corp: 5/33b lim: 8 exec/s: 45 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:07:52.657 [2024-12-12 10:10:06.189933] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:52.916 #51 NEW cov: 11265 ft: 17131 corp: 6/41b lim: 8 exec/s: 51 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:07:52.916 [2024-12-12 10:10:06.372810] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:52.916 [2024-12-12 10:10:06.372843] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:52.916 NEW_FUNC[1/1]: 0x1598348 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3131 00:07:52.916 #52 NEW cov: 11275 ft: 17274 corp: 7/49b lim: 8 exec/s: 52 rss: 76Mb L: 8/8 MS: 1 CopyPart- 00:07:53.175 [2024-12-12 10:10:06.570865] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:53.175 [2024-12-12 10:10:06.570894] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:53.175 #53 NEW cov: 11275 ft: 17315 corp: 8/57b lim: 8 exec/s: 53 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:53.175 [2024-12-12 10:10:06.762768] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:53.433 #54 NEW cov: 11282 ft: 17373 corp: 9/65b lim: 8 exec/s: 54 rss: 77Mb L: 8/8 MS: 1 ShuffleBytes- 
00:07:53.433 [2024-12-12 10:10:06.949173] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:53.434 #55 NEW cov: 11282 ft: 17413 corp: 10/73b lim: 8 exec/s: 27 rss: 77Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:53.434 #55 DONE cov: 11282 ft: 17413 corp: 10/73b lim: 8 exec/s: 27 rss: 77Mb 00:07:53.434 Done 55 runs in 2 second(s) 00:07:53.693 [2024-12-12 10:10:07.075916] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:53.693 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:53.693 10:10:07 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:53.953 [2024-12-12 10:10:07.342067] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:53.953 [2024-12-12 10:10:07.342150] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482220 ] 00:07:53.953 [2024-12-12 10:10:07.437839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.953 [2024-12-12 10:10:07.478696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.212 INFO: Running with entropic power schedule (0xFF, 100). 00:07:54.212 INFO: Seed: 2565871530 00:07:54.212 INFO: Loaded 1 modules (388547 inline 8-bit counters): 388547 [0x2c55ccc, 0x2cb4a8f), 00:07:54.212 INFO: Loaded 1 PC tables (388547 PCs): 388547 [0x2cb4a90,0x32a26c0), 00:07:54.212 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:54.212 INFO: A corpus is not provided, starting from an empty corpus 00:07:54.212 #2 INITED exec/s: 0 rss: 67Mb 00:07:54.212 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:54.212 This may also happen if the target rejected all inputs we tried so far 00:07:54.212 [2024-12-12 10:10:07.736127] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:54.730 NEW_FUNC[1/677]: 0x43cc88 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:54.730 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:54.730 #130 NEW cov: 11238 ft: 11206 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:54.989 #131 NEW cov: 11253 ft: 14802 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:54.989 NEW_FUNC[1/1]: 0x1c2bb78 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:54.989 #132 NEW cov: 11270 ft: 15532 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:55.248 #133 NEW cov: 11273 ft: 16076 corp: 5/129b lim: 32 exec/s: 133 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:55.506 #139 NEW cov: 11273 ft: 16439 corp: 6/161b lim: 32 exec/s: 139 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:55.506 #143 NEW cov: 11273 ft: 16684 corp: 7/193b lim: 32 exec/s: 143 rss: 76Mb L: 32/32 MS: 4 EraseBytes-CrossOver-InsertByte-InsertByte- 00:07:55.765 #144 NEW cov: 11273 ft: 16754 corp: 8/225b lim: 32 exec/s: 144 rss: 76Mb L: 32/32 MS: 1 ChangeBit- 00:07:56.024 #145 NEW cov: 11280 ft: 17153 corp: 9/257b lim: 32 exec/s: 145 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:07:56.283 #146 NEW cov: 11280 ft: 17169 corp: 10/289b lim: 32 exec/s: 146 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:07:56.283 #147 NEW cov: 11280 ft: 17380 corp: 11/321b lim: 32 exec/s: 73 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:56.283 #147 DONE cov: 11280 ft: 17380 corp: 11/321b lim: 32 exec/s: 73 rss: 76Mb 00:07:56.283 Done 147 runs in 2 second(s) 00:07:56.283 [2024-12-12 10:10:09.895918] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.542 10:10:10 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:56.542 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:56.542 10:10:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:56.542 [2024-12-12 10:10:10.162094] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:07:56.542 [2024-12-12 10:10:10.162161] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482754 ] 00:07:56.801 [2024-12-12 10:10:10.255338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.801 [2024-12-12 10:10:10.295962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.060 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:57.060 INFO: Seed: 1082904077 00:07:57.060 INFO: Loaded 1 modules (388547 inline 8-bit counters): 388547 [0x2c55ccc, 0x2cb4a8f), 00:07:57.060 INFO: Loaded 1 PC tables (388547 PCs): 388547 [0x2cb4a90,0x32a26c0), 00:07:57.060 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:57.060 INFO: A corpus is not provided, starting from an empty corpus 00:07:57.060 #2 INITED exec/s: 0 rss: 67Mb 00:07:57.060 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:57.060 This may also happen if the target rejected all inputs we tried so far 00:07:57.060 [2024-12-12 10:10:10.543207] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:57.577 NEW_FUNC[1/677]: 0x43d508 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:57.577 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:57.577 #91 NEW cov: 11236 ft: 10976 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 4 ChangeBit-InsertByte-InsertRepeatedBytes-CMP- DE: "\377\377\377\377\377\000]\353"- 00:07:57.577 #97 NEW cov: 11252 ft: 14117 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:07:57.836 NEW_FUNC[1/1]: 0x1c2bb78 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:07:57.836 #113 NEW cov: 11269 ft: 16201 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:58.095 #124 NEW cov: 11269 ft: 16503 corp: 5/129b lim: 32 exec/s: 124 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:58.354 #130 NEW cov: 11269 ft: 17276 corp: 6/161b lim: 32 exec/s: 130 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:07:58.354 #131 NEW cov: 11269 ft: 17633 corp: 7/193b lim: 32 exec/s: 131 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:07:58.613 #132 NEW cov: 11269 ft: 17707 corp: 8/225b lim: 32 exec/s: 132 rss: 77Mb L: 32/32 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\000]\353"- 00:07:58.871 #133 NEW cov: 11269 ft: 17785 corp: 9/257b lim: 32 exec/s: 133 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:58.871 #134 NEW cov: 11276 ft: 17819 corp: 10/289b lim: 32 exec/s: 134 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:07:59.130 #135 NEW cov: 11276 ft: 18086 corp: 11/321b lim: 32 exec/s: 67 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:59.130 #135 DONE cov: 11276 ft: 18086 corp: 11/321b lim: 32 exec/s: 67 rss: 77Mb 00:07:59.130 ###### Recommended dictionary. ###### 00:07:59.130 "\377\377\377\377\377\000]\353" # Uses: 1 00:07:59.130 ###### End of recommended dictionary. 
###### 00:07:59.130 Done 135 runs in 2 second(s) 00:07:59.130 [2024-12-12 10:10:12.629916] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:59.390 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:59.390 10:10:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:59.390 [2024-12-12 10:10:12.896642] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 
00:07:59.390 [2024-12-12 10:10:12.896733] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483296 ] 00:07:59.390 [2024-12-12 10:10:12.993395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.649 [2024-12-12 10:10:13.034220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.649 INFO: Running with entropic power schedule (0xFF, 100). 00:07:59.649 INFO: Seed: 3817894227 00:07:59.649 INFO: Loaded 1 modules (388547 inline 8-bit counters): 388547 [0x2c55ccc, 0x2cb4a8f), 00:07:59.649 INFO: Loaded 1 PC tables (388547 PCs): 388547 [0x2cb4a90,0x32a26c0), 00:07:59.649 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:59.649 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.649 #2 INITED exec/s: 0 rss: 67Mb 00:07:59.649 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:59.649 This may also happen if the target rejected all inputs we tried so far 00:07:59.649 [2024-12-12 10:10:13.278726] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:59.908 [2024-12-12 10:10:13.347935] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:59.908 [2024-12-12 10:10:13.347973] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.167 NEW_FUNC[1/677]: 0x43df08 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:00.167 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:00.167 #47 NEW cov: 11251 ft: 10909 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 5 InsertRepeatedBytes-ShuffleBytes-ShuffleBytes-ChangeBinInt-CopyPart- 00:08:00.425 [2024-12-12 10:10:13.833890] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.425 [2024-12-12 10:10:13.833932] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.425 NEW_FUNC[1/1]: 0x18beb88 in nvme_pcie_qpair /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_pcie_internal.h:207 00:08:00.425 #53 NEW cov: 11267 ft: 14105 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 CopyPart- 00:08:00.425 [2024-12-12 10:10:14.022923] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.425 [2024-12-12 10:10:14.022955] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.684 NEW_FUNC[1/1]: 0x1c2bb78 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:08:00.684 #54 NEW cov: 11284 ft: 14429 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:08:00.685 [2024-12-12 10:10:14.203537] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.685 [2024-12-12 10:10:14.203567] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.685 #55 NEW cov: 11284 ft: 15793 corp: 5/53b lim: 13 exec/s: 55 rss: 76Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:00.943 [2024-12-12 10:10:14.385191] vfio_user.c:3143:vfio_user_log: *ERROR*: 
/tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.943 [2024-12-12 10:10:14.385220] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:00.943 #61 NEW cov: 11284 ft: 15959 corp: 6/66b lim: 13 exec/s: 61 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:00.943 [2024-12-12 10:10:14.574849] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:00.943 [2024-12-12 10:10:14.574879] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.202 #72 NEW cov: 11284 ft: 16522 corp: 7/79b lim: 13 exec/s: 72 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:08:01.202 [2024-12-12 10:10:14.762495] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:01.202 [2024-12-12 10:10:14.762526] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.461 #75 NEW cov: 11284 ft: 16873 corp: 8/92b lim: 13 exec/s: 75 rss: 77Mb L: 13/13 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:08:01.461 [2024-12-12 10:10:14.963214] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:01.461 [2024-12-12 10:10:14.963245] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.461 #76 NEW cov: 11291 ft: 16974 corp: 9/105b lim: 13 exec/s: 76 rss: 77Mb L: 13/13 MS: 1 CopyPart- 00:08:01.719 [2024-12-12 10:10:15.146401] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:01.719 [2024-12-12 10:10:15.146431] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.719 #77 NEW cov: 11291 ft: 17095 corp: 10/118b lim: 13 exec/s: 77 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:01.719 [2024-12-12 10:10:15.328663] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:01.719 [2024-12-12 10:10:15.328694] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:01.978 #78 NEW cov: 11291 ft: 17113 corp: 11/131b lim: 13 exec/s: 39 rss: 77Mb L: 13/13 MS: 1 CopyPart- 00:08:01.978 #78 DONE cov: 11291 ft: 17113 corp: 11/131b lim: 13 exec/s: 39 rss: 77Mb 00:08:01.978 Done 78 runs in 2 second(s) 00:08:01.978 [2024-12-12 10:10:15.453915] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 
00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:02.237 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:02.237 10:10:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:02.237 [2024-12-12 10:10:15.722230] Starting SPDK v25.01-pre git sha1 44c641464 / DPDK 24.03.0 initialization... 00:08:02.237 [2024-12-12 10:10:15.722295] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483831 ] 00:08:02.237 [2024-12-12 10:10:15.816193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.237 [2024-12-12 10:10:15.855854] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.496 INFO: Running with entropic power schedule (0xFF, 100). 00:08:02.496 INFO: Seed: 2339915536 00:08:02.496 INFO: Loaded 1 modules (388547 inline 8-bit counters): 388547 [0x2c55ccc, 0x2cb4a8f), 00:08:02.496 INFO: Loaded 1 PC tables (388547 PCs): 388547 [0x2cb4a90,0x32a26c0), 00:08:02.496 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:02.496 INFO: A corpus is not provided, starting from an empty corpus 00:08:02.496 #2 INITED exec/s: 0 rss: 67Mb 00:08:02.496 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:02.496 This may also happen if the target rejected all inputs we tried so far 00:08:02.496 [2024-12-12 10:10:16.093918] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:02.496 [2024-12-12 10:10:16.121755] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:02.496 [2024-12-12 10:10:16.121787] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:03.014 NEW_FUNC[1/678]: 0x43ebf8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:03.014 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:03.014 #11 NEW cov: 11242 ft: 11128 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 4 InsertRepeatedBytes-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:03.014 [2024-12-12 10:10:16.561055] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:03.014 [2024-12-12 10:10:16.561094] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:03.014 #12 NEW cov: 11259 ft: 13667 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:08:03.273 [2024-12-12 10:10:16.683975] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:03.273 [2024-12-12 10:10:16.684008] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:03.273 #18 NEW cov: 11259 ft: 13827 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:03.273 [2024-12-12 10:10:16.797135] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:03.273 [2024-12-12 10:10:16.797170] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:03.273 NEW_FUNC[1/1]: 0x1c2bb78 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:652 00:08:03.273 #19 NEW cov: 11276 ft: 15595 corp: 5/37b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:08:03.532 [2024-12-12 10:10:16.920344] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:03.532 [2024-12-12 10:10:16.920376] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:03.532 #20 NEW cov: 11276 ft: 16228 corp: 6/46b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:03.532 [2024-12-12 10:10:17.033490] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:03.532 [2024-12-12 10:10:17.033523] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:03.532 #21 NEW cov: 11276 ft: 17098 corp: 7/55b lim: 9 exec/s: 21 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:08:03.532 [2024-12-12 10:10:17.155369] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:03.532 [2024-12-12 10:10:17.155401] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:03.791 #22 NEW cov: 11276 ft: 17459 corp: 8/64b lim: 9 exec/s: 22 rss: 76Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:08:03.791 [2024-12-12 10:10:17.287398] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:03.791 [2024-12-12 10:10:17.287432] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:03.791 #28 NEW cov: 11276 
00:08:03.791 #28 NEW cov: 11276 ft: 17579 corp: 9/73b lim: 9 exec/s: 28 rss: 77Mb L: 9/9 MS: 1 ChangeByte-
00:08:03.791 [2024-12-12 10:10:17.399329] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:03.791 [2024-12-12 10:10:17.399362] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:04.050 #29 NEW cov: 11276 ft: 17592 corp: 10/82b lim: 9 exec/s: 29 rss: 77Mb L: 9/9 MS: 1 ChangeBit-
00:08:04.050 [2024-12-12 10:10:17.521121] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:04.050 [2024-12-12 10:10:17.521154] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:04.050 #30 NEW cov: 11276 ft: 17604 corp: 11/91b lim: 9 exec/s: 30 rss: 77Mb L: 9/9 MS: 1 ChangeBit-
00:08:04.050 [2024-12-12 10:10:17.634121] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:04.050 [2024-12-12 10:10:17.634153] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:04.309 #31 NEW cov: 11276 ft: 17914 corp: 12/100b lim: 9 exec/s: 31 rss: 77Mb L: 9/9 MS: 1 CMP- DE: "\000\000\000\000"-
00:08:04.309 [2024-12-12 10:10:17.747142] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:04.309 [2024-12-12 10:10:17.747176] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:04.309 #32 NEW cov: 11276 ft: 18077 corp: 13/109b lim: 9 exec/s: 32 rss: 77Mb L: 9/9 MS: 1 ChangeByte-
00:08:04.309 [2024-12-12 10:10:17.870139] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:04.309 [2024-12-12 10:10:17.870171] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:04.309 #33 NEW cov: 11283 ft: 18242 corp: 14/118b lim: 9 exec/s: 33 rss: 77Mb L: 9/9 MS: 1 CopyPart-
00:08:04.569 [2024-12-12 10:10:17.983153] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:08:04.569 [2024-12-12 10:10:17.983183] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:08:04.569 #39 NEW cov: 11283 ft: 18244 corp: 15/127b lim: 9 exec/s: 19 rss: 77Mb L: 9/9 MS: 1 CopyPart-
00:08:04.569 #39 DONE cov: 11283 ft: 18244 corp: 15/127b lim: 9 exec/s: 19 rss: 77Mb
00:08:04.569 ###### Recommended dictionary. ######
00:08:04.569 "\000\000\000\000" # Uses: 0
00:08:04.569 ###### End of recommended dictionary. ######
00:08:04.569 Done 39 runs in 2 second(s)
00:08:04.569 [2024-12-12 10:10:18.077913] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:08:04.828 10:10:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:08:04.828 10:10:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:04.828 10:10:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:04.828 10:10:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:08:04.828
00:08:04.828 real 0m19.669s
00:08:04.828 user 0m27.292s
00:08:04.828 sys 0m2.001s 10:10:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:04.829 10:10:18 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:04.829 ************************************
00:08:04.829 END TEST vfio_llvm_fuzz
00:08:04.829 ************************************
00:08:04.829
00:08:04.829 real 1m26.063s
00:08:04.829 user 2m7.796s
00:08:04.829 sys 0m11.562s 10:10:18 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:04.829 10:10:18 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:08:04.829 ************************************
00:08:04.829 END TEST llvm_fuzz
00:08:04.829 ************************************
00:08:04.829 10:10:18 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:08:04.829 10:10:18 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:08:04.829 10:10:18 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:08:04.829 10:10:18 -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:04.829 10:10:18 -- common/autotest_common.sh@10 -- # set +x
00:08:04.829 10:10:18 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:08:04.829 10:10:18 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:08:04.829 10:10:18 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:08:04.829 10:10:18 -- common/autotest_common.sh@10 -- # set +x
00:08:11.400 INFO: APP EXITING
00:08:11.400 INFO: killing all VMs
00:08:11.400 INFO: killing vhost app
00:08:11.400 INFO: EXIT DONE
00:08:14.692 Waiting for block devices as requested
00:08:14.692 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:14.692 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:14.951 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:14.951 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:14.951 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:15.211 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:15.211 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:15.211 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:15.470 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:15.470 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:15.470 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:15.729 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:15.729 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:15.729 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:15.988 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:15.988 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:15.988 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:08:20.183 Cleaning
00:08:20.183 Removing: /dev/shm/spdk_tgt_trace.pid455063
00:08:20.183 Removing: /var/run/dpdk/spdk_pid452602
00:08:20.183 Removing: /var/run/dpdk/spdk_pid453798
00:08:20.183 Removing: /var/run/dpdk/spdk_pid455063
00:08:20.183 Removing: /var/run/dpdk/spdk_pid455534
00:08:20.183 Removing: /var/run/dpdk/spdk_pid456611
00:08:20.183 Removing: /var/run/dpdk/spdk_pid456718
00:08:20.183 Removing: /var/run/dpdk/spdk_pid457740
00:08:20.183 Removing: /var/run/dpdk/spdk_pid457760
00:08:20.183 Removing: /var/run/dpdk/spdk_pid458187
00:08:20.183 Removing: /var/run/dpdk/spdk_pid458523
00:08:20.183 Removing: /var/run/dpdk/spdk_pid458839
00:08:20.183 Removing: /var/run/dpdk/spdk_pid459177
00:08:20.183 Removing: /var/run/dpdk/spdk_pid459501
00:08:20.183 Removing: /var/run/dpdk/spdk_pid459669
00:08:20.183 Removing: /var/run/dpdk/spdk_pid459834
00:08:20.183 Removing: /var/run/dpdk/spdk_pid460147
00:08:20.183 Removing: /var/run/dpdk/spdk_pid460994
00:08:20.183 Removing: /var/run/dpdk/spdk_pid464137
00:08:20.183 Removing: /var/run/dpdk/spdk_pid464376
00:08:20.183 Removing: /var/run/dpdk/spdk_pid464532
00:08:20.183 Removing: /var/run/dpdk/spdk_pid464727
00:08:20.183 Removing: /var/run/dpdk/spdk_pid465301
00:08:20.183 Removing: /var/run/dpdk/spdk_pid465333
00:08:20.183 Removing: /var/run/dpdk/spdk_pid465901
00:08:20.183 Removing: /var/run/dpdk/spdk_pid465910
00:08:20.183 Removing: /var/run/dpdk/spdk_pid466207
00:08:20.183 Removing: /var/run/dpdk/spdk_pid466377
00:08:20.183 Removing: /var/run/dpdk/spdk_pid466514
00:08:20.183 Removing: /var/run/dpdk/spdk_pid466658
00:08:20.183 Removing: /var/run/dpdk/spdk_pid467152
00:08:20.183 Removing: /var/run/dpdk/spdk_pid467432
00:08:20.183 Removing: /var/run/dpdk/spdk_pid467718
00:08:20.183 Removing: /var/run/dpdk/spdk_pid467854
00:08:20.183 Removing: /var/run/dpdk/spdk_pid468550
00:08:20.183 Removing: /var/run/dpdk/spdk_pid469084
00:08:20.183 Removing: /var/run/dpdk/spdk_pid469395
00:08:20.183 Removing: /var/run/dpdk/spdk_pid469969
00:08:20.183 Removing: /var/run/dpdk/spdk_pid470572
00:08:20.183 Removing: /var/run/dpdk/spdk_pid471427
00:08:20.183 Removing: /var/run/dpdk/spdk_pid471824
00:08:20.183 Removing: /var/run/dpdk/spdk_pid472357
00:08:20.183 Removing: /var/run/dpdk/spdk_pid472825
00:08:20.183 Removing: /var/run/dpdk/spdk_pid473179
00:08:20.183 Removing: /var/run/dpdk/spdk_pid473716
00:08:20.183 Removing: /var/run/dpdk/spdk_pid474246
00:08:20.183 Removing: /var/run/dpdk/spdk_pid474595
00:08:20.183 Removing: /var/run/dpdk/spdk_pid475067
00:08:20.183 Removing: /var/run/dpdk/spdk_pid475607
00:08:20.183 Removing: /var/run/dpdk/spdk_pid476041
00:08:20.183 Removing: /var/run/dpdk/spdk_pid476428
00:08:20.183 Removing: /var/run/dpdk/spdk_pid476967
00:08:20.183 Removing: /var/run/dpdk/spdk_pid477489
00:08:20.183 Removing: /var/run/dpdk/spdk_pid477787
00:08:20.183 Removing: /var/run/dpdk/spdk_pid478316
00:08:20.183 Removing: /var/run/dpdk/spdk_pid478856
00:08:20.183 Removing: /var/run/dpdk/spdk_pid479199
00:08:20.183 Removing: /var/run/dpdk/spdk_pid479677
00:08:20.183 Removing: /var/run/dpdk/spdk_pid480226
00:08:20.183 Removing: /var/run/dpdk/spdk_pid480855
00:08:20.183 Removing: /var/run/dpdk/spdk_pid481392
00:08:20.183 Removing: /var/run/dpdk/spdk_pid481827
00:08:20.183 Removing: /var/run/dpdk/spdk_pid482220
00:08:20.183 Removing: /var/run/dpdk/spdk_pid482754
00:08:20.183 Removing: /var/run/dpdk/spdk_pid483296
00:08:20.183 Removing: /var/run/dpdk/spdk_pid483831
00:08:20.183 Clean
00:08:20.183 10:10:33 -- common/autotest_common.sh@1453 -- # return 0
00:08:20.183 10:10:33 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:08:20.183 10:10:33 -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:20.183 10:10:33 -- common/autotest_common.sh@10 -- # set +x
00:08:20.183 10:10:33 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:08:20.183 10:10:33 -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:20.183 10:10:33 -- common/autotest_common.sh@10 -- # set +x
00:08:20.183 10:10:33 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:20.183 10:10:33 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:20.183 10:10:33 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:20.183 10:10:33 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:08:20.183 10:10:33 -- spdk/autotest.sh@398 -- # hostname
00:08:20.183 10:10:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info
00:08:20.183 geninfo: WARNING: invalid characters removed from testname!
00:08:25.458 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda
00:08:30.734 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda
00:08:33.271 10:10:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:41.392 10:10:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:45.595 10:10:59 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:50.870 10:11:04 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:08:56.142 10:11:09 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:01.414 10:11:15 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:06.693 10:11:20 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:09:06.693 10:11:20 -- spdk/autorun.sh@1 -- $ timing_finish
00:09:06.693 10:11:20 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]]
00:09:06.693 10:11:20 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:06.693 10:11:20 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:09:06.693 10:11:20 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:06.952 + [[ -n 340302 ]]
00:09:06.952 + sudo kill 340302
00:09:06.962 [Pipeline] }
00:09:06.977 [Pipeline] // stage
00:09:06.983 [Pipeline] }
00:09:06.998 [Pipeline] // timeout
00:09:07.003 [Pipeline] }
00:09:07.017 [Pipeline] // catchError
00:09:07.022 [Pipeline] }
00:09:07.037 [Pipeline] // wrap
00:09:07.043 [Pipeline] }
00:09:07.056 [Pipeline] // catchError
00:09:07.065 [Pipeline] stage
00:09:07.068 [Pipeline] { (Epilogue)
00:09:07.080 [Pipeline] catchError
00:09:07.082 [Pipeline] {
00:09:07.095 [Pipeline] echo
00:09:07.097 Cleanup processes
00:09:07.103 [Pipeline] sh
00:09:07.390 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:07.390 492290 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:07.405 [Pipeline] sh
00:09:07.691 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:07.691 ++ grep -v 'sudo pgrep'
00:09:07.691 ++ awk '{print $1}'
00:09:07.691 + sudo kill -9
00:09:07.703 [Pipeline] sh
00:09:07.988 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:07.988 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:07.988 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:09.366 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:21.622 [Pipeline] sh
00:09:21.911 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:21.911 Artifacts sizes are good
00:09:21.926 [Pipeline] archiveArtifacts
00:09:21.934 Archiving artifacts
00:09:22.082 [Pipeline] sh
00:09:22.372 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:22.389 [Pipeline] cleanWs
00:09:22.400 [WS-CLEANUP] Deleting project workspace...
00:09:22.400 [WS-CLEANUP] Deferred wipeout is used...
00:09:22.407 [WS-CLEANUP] done
00:09:22.410 [Pipeline] }
00:09:22.428 [Pipeline] // catchError
00:09:22.443 [Pipeline] sh
00:09:22.730 + logger -p user.info -t JENKINS-CI
00:09:22.739 [Pipeline] }
00:09:22.752 [Pipeline] // stage
00:09:22.757 [Pipeline] }
00:09:22.771 [Pipeline] // node
00:09:22.776 [Pipeline] End of Pipeline
00:09:22.813 Finished: SUCCESS