00:00:00.001 Started by upstream project "autotest-per-patch" build number 132704
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.027 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:02.563 The recommended git tool is: git
00:00:02.564 using credential 00000000-0000-0000-0000-000000000002
00:00:02.565 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.583 Fetching changes from the remote Git repository
00:00:02.586 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.599 Using shallow fetch with depth 1
00:00:02.599 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.599 > git --version # timeout=10
00:00:02.609 > git --version # 'git version 2.39.2'
00:00:02.610 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.621 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.621 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.114 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.130 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.146 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.146 > git config core.sparsecheckout # timeout=10
00:00:08.159 > git read-tree -mu HEAD # timeout=10
00:00:08.176 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.201 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.201 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.282 [Pipeline] Start of Pipeline
00:00:08.298 [Pipeline] library
00:00:08.299 Loading library shm_lib@master
00:00:08.299 Library shm_lib@master is cached. Copying from home.
00:00:08.313 [Pipeline] node
00:00:08.334 Running on WFP29 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:08.335 [Pipeline] {
00:00:08.343 [Pipeline] catchError
00:00:08.345 [Pipeline] {
00:00:08.353 [Pipeline] wrap
00:00:08.361 [Pipeline] {
00:00:08.366 [Pipeline] stage
00:00:08.367 [Pipeline] { (Prologue)
00:00:08.607 [Pipeline] sh
00:00:08.891 + logger -p user.info -t JENKINS-CI
00:00:08.908 [Pipeline] echo
00:00:08.909 Node: WFP29
00:00:08.916 [Pipeline] sh
00:00:09.215 [Pipeline] setCustomBuildProperty
00:00:09.226 [Pipeline] echo
00:00:09.227 Cleanup processes
00:00:09.231 [Pipeline] sh
00:00:09.516 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:09.516 1724673 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:09.527 [Pipeline] sh
00:00:09.805 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:09.805 ++ grep -v 'sudo pgrep'
00:00:09.805 ++ awk '{print $1}'
00:00:09.805 + sudo kill -9
00:00:09.805 + true
00:00:09.819 [Pipeline] cleanWs
00:00:09.827 [WS-CLEANUP] Deleting project workspace...
00:00:09.827 [WS-CLEANUP] Deferred wipeout is used...
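The cleanup step above reduces to a small shell idiom; a minimal sketch assuming the same workspace path (the WS variable name is illustrative, and no stale processes existed on this run, so the kill ran with an empty PID list):

    WS=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # List processes still running out of the workspace, drop the pgrep itself,
    # and keep only the PID column; $pids is left unquoted on purpose so that
    # multiple PIDs split into separate kill arguments.
    pids=$(sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill exits non-zero when the PID list is empty, hence the trailing true
    # (visible above as "+ sudo kill -9" followed by "+ true").
    sudo kill -9 $pids || true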
00:00:09.834 [WS-CLEANUP] done
00:00:09.838 [Pipeline] setCustomBuildProperty
00:00:09.850 [Pipeline] sh
00:00:10.130 + sudo git config --global --replace-all safe.directory '*'
00:00:10.261 [Pipeline] httpRequest
00:00:10.831 [Pipeline] echo
00:00:10.833 Sorcerer 10.211.164.20 is alive
00:00:10.842 [Pipeline] retry
00:00:10.843 [Pipeline] {
00:00:10.854 [Pipeline] httpRequest
00:00:10.858 HttpMethod: GET
00:00:10.858 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.858 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.864 Response Code: HTTP/1.1 200 OK
00:00:10.865 Success: Status code 200 is in the accepted range: 200,404
00:00:10.865 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:26.228 [Pipeline] }
00:00:26.243 [Pipeline] // retry
00:00:26.250 [Pipeline] sh
00:00:26.529 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:26.542 [Pipeline] httpRequest
00:00:27.262 [Pipeline] echo
00:00:27.264 Sorcerer 10.211.164.20 is alive
00:00:27.273 [Pipeline] retry
00:00:27.274 [Pipeline] {
00:00:27.286 [Pipeline] httpRequest
00:00:27.290 HttpMethod: GET
00:00:27.290 URL: http://10.211.164.20/packages/spdk_2c140f58ffe19fb26bb9d25f4df8ac7937a32557.tar.gz
00:00:27.291 Sending request to url: http://10.211.164.20/packages/spdk_2c140f58ffe19fb26bb9d25f4df8ac7937a32557.tar.gz
00:00:27.305 Response Code: HTTP/1.1 200 OK
00:00:27.305 Success: Status code 200 is in the accepted range: 200,404
00:00:27.305 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_2c140f58ffe19fb26bb9d25f4df8ac7937a32557.tar.gz
00:02:54.358 [Pipeline] }
00:02:54.375 [Pipeline] // retry
00:02:54.382 [Pipeline] sh
00:02:54.663 + tar --no-same-owner -xf spdk_2c140f58ffe19fb26bb9d25f4df8ac7937a32557.tar.gz
00:02:57.202 [Pipeline] sh
00:02:57.483 + git -C spdk log --oneline -n5
00:02:57.483 2c140f58f nvme/rdma: Support accel sequence
00:02:57.483 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:02:57.483 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller
00:02:57.483 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller
00:02:57.483 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables
00:02:57.493 [Pipeline] }
00:02:57.507 [Pipeline] // stage
00:02:57.517 [Pipeline] stage
00:02:57.520 [Pipeline] { (Prepare)
00:02:57.538 [Pipeline] writeFile
00:02:57.556 [Pipeline] sh
00:02:57.839 + logger -p user.info -t JENKINS-CI
00:02:57.852 [Pipeline] sh
00:02:58.134 + logger -p user.info -t JENKINS-CI
00:02:58.147 [Pipeline] sh
00:02:58.429 + cat autorun-spdk.conf
00:02:58.429 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:58.429 SPDK_TEST_FUZZER_SHORT=1
00:02:58.429 SPDK_TEST_FUZZER=1
00:02:58.429 SPDK_TEST_SETUP=1
00:02:58.429 SPDK_RUN_UBSAN=1
00:02:58.435 RUN_NIGHTLY=0
00:02:58.441 [Pipeline] readFile
00:02:58.467 [Pipeline] withEnv
00:02:58.470 [Pipeline] {
00:02:58.483 [Pipeline] sh
00:02:58.766 + set -ex
00:02:58.766 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:02:58.766 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:02:58.766 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:58.766 ++ SPDK_TEST_FUZZER_SHORT=1
00:02:58.766 ++ SPDK_TEST_FUZZER=1
00:02:58.766 ++ SPDK_TEST_SETUP=1
00:02:58.766 ++ SPDK_RUN_UBSAN=1
00:02:58.766 ++ RUN_NIGHTLY=0
00:02:58.766 + case $SPDK_TEST_NVMF_NICS in
00:02:58.766 + DRIVERS=
00:02:58.766 + [[ -n '' ]]
00:02:58.766 + exit 0
00:02:58.775 [Pipeline] }
00:02:58.790 [Pipeline] // withEnv
00:02:58.796 [Pipeline] }
00:02:58.810 [Pipeline] // stage
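The Prepare stage sources autorun-spdk.conf and gates optional node setup on its flags; a minimal sketch of the pattern traced above, with a hypothetical NIC-to-driver mapping purely for illustration (this job leaves SPDK_TEST_NVMF_NICS unset, so the helper exits immediately):

    source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
    case "$SPDK_TEST_NVMF_NICS" in
      mlx5_core) DRIVERS=mlx5_core ;;   # hypothetical mapping, not from this log
      *)         DRIVERS= ;;
    esac
    # With no NIC under test there is nothing to load, so the step exits 0,
    # matching the "+ DRIVERS=" / "+ exit 0" trace above.
    [[ -n $DRIVERS ]] || exit 0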
00:02:58.821 [Pipeline] catchError
00:02:58.823 [Pipeline] {
00:02:58.837 [Pipeline] timeout
00:02:58.838 Timeout set to expire in 30 min
00:02:58.840 [Pipeline] {
00:02:58.855 [Pipeline] stage
00:02:58.858 [Pipeline] { (Tests)
00:02:58.873 [Pipeline] sh
00:02:59.157 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:59.157 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:59.157 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:02:59.157 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:02:59.157 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:59.157 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:02:59.157 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:02:59.157 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:02:59.157 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:02:59.157 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:02:59.157 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:02:59.157 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:02:59.157 + source /etc/os-release
00:02:59.157 ++ NAME='Fedora Linux'
00:02:59.157 ++ VERSION='39 (Cloud Edition)'
00:02:59.157 ++ ID=fedora
00:02:59.157 ++ VERSION_ID=39
00:02:59.157 ++ VERSION_CODENAME=
00:02:59.157 ++ PLATFORM_ID=platform:f39
00:02:59.157 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:59.157 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:59.157 ++ LOGO=fedora-logo-icon
00:02:59.157 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:59.157 ++ HOME_URL=https://fedoraproject.org/
00:02:59.157 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:59.157 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:59.157 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:59.157 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:59.157 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:59.157 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:59.157 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:59.157 ++ SUPPORT_END=2024-11-12
00:02:59.157 ++ VARIANT='Cloud Edition'
00:02:59.157 ++ VARIANT_ID=cloud
00:02:59.157 + uname -a
00:02:59.157 Linux spdk-wfp-29 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:59.157 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:03:02.440 Hugepages
00:03:02.440 node hugesize free / total
00:03:02.440 node0 1048576kB 0 / 0
00:03:02.440 node0 2048kB 0 / 0
00:03:02.440 node1 1048576kB 0 / 0
00:03:02.440 node1 2048kB 0 / 0
00:03:02.440
00:03:02.440 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:02.440 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:02.440 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:02.440 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:02.440 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:02.440 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:02.440 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:02.440 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:02.440 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:02.440 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:02.440 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:02.440 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:02.440 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:02.440 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:02.440 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:02.440 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:02.440 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:02.440 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:02.698 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1
00:03:02.698 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1
00:03:02.698 + rm -f /tmp/spdk-ld-path
00:03:02.698 + source autorun-spdk.conf
00:03:02.698 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.698 ++ SPDK_TEST_FUZZER_SHORT=1
00:03:02.698 ++ SPDK_TEST_FUZZER=1
00:03:02.698 ++ SPDK_TEST_SETUP=1
00:03:02.698 ++ SPDK_RUN_UBSAN=1
00:03:02.698 ++ RUN_NIGHTLY=0
00:03:02.698 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:02.698 + [[ -n '' ]]
00:03:02.698 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:03:02.698 + for M in /var/spdk/build-*-manifest.txt
00:03:02.698 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:02.698 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:03:02.698 + for M in /var/spdk/build-*-manifest.txt
00:03:02.698 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:02.698 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:03:02.698 + for M in /var/spdk/build-*-manifest.txt
00:03:02.698 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:02.698 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:03:02.698 ++ uname
00:03:02.698 + [[ Linux == \L\i\n\u\x ]]
00:03:02.698 + sudo dmesg -T
00:03:02.698 + sudo dmesg --clear
00:03:02.698 + dmesg_pid=1726199
00:03:02.698 + [[ Fedora Linux == FreeBSD ]]
00:03:02.698 + sudo dmesg -Tw
00:03:02.698 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:02.698 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:02.698 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:02.698 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:02.698 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:02.698 + [[ -x /usr/src/fio-static/fio ]]
00:03:02.698 + export FIO_BIN=/usr/src/fio-static/fio
00:03:02.698 + FIO_BIN=/usr/src/fio-static/fio
00:03:02.698 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:02.698 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:02.698 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:02.698 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:02.698 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:02.698 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:02.698 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:02.698 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:02.698 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:03:02.955 20:22:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:02.955 20:22:56 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:03:02.955 20:22:56 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.955 20:22:56 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1
00:03:02.955 20:22:56 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1
00:03:02.955 20:22:56 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1
00:03:02.955 20:22:56 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1
00:03:02.955 20:22:56 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:03:02.955 20:22:56 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:02.955 20:22:56 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:03:02.955 20:22:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:02.955 20:22:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:03:02.955 20:22:56 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:02.955 20:22:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:02.955 20:22:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:02.955 20:22:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:02.955 20:22:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.955 20:22:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.955 20:22:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.955 20:22:56 -- paths/export.sh@5 -- $ export PATH
00:03:02.955 20:22:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.955 20:22:56 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:03:02.955 20:22:56 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:02.955 20:22:56 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733426576.XXXXXX
00:03:02.955 20:22:56 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733426576.TLXpE4
00:03:02.955 20:22:56 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:02.955 20:22:56 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:03:02.955 20:22:56 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:03:02.955 20:22:56 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:02.955 20:22:56 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:02.955 20:22:56 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:02.955 20:22:56 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:02.955 20:22:56 -- common/autotest_common.sh@10 -- $ set +x
00:03:02.955 20:22:56 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:03:02.955 20:22:56 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:02.955 20:22:56 -- pm/common@17 -- $ local monitor
00:03:02.955 20:22:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.955 20:22:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.955 20:22:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.955 20:22:56 -- pm/common@21 -- $ date +%s
00:03:02.955 20:22:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.955 20:22:56 -- pm/common@21 -- $ date +%s
00:03:02.955 20:22:56 -- pm/common@25 -- $ sleep 1
00:03:02.955 20:22:56 -- pm/common@21 -- $ date +%s
00:03:02.955 20:22:56 -- pm/common@21 -- $ date +%s
00:03:02.955 20:22:56 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733426576
00:03:02.956 20:22:56 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733426576
00:03:02.956 20:22:56 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733426576
00:03:02.956 20:22:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733426576
00:03:02.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733426576_collect-vmstat.pm.log
00:03:02.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733426576_collect-cpu-load.pm.log
00:03:02.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733426576_collect-cpu-temp.pm.log
00:03:02.956 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733426576_collect-bmc-pm.bmc.pm.log
00:03:03.889 20:22:57 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:03.889 20:22:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:03.889 20:22:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:03.889 20:22:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:03:03.889 20:22:57 -- spdk/autobuild.sh@16 -- $ date -u
00:03:03.889 Thu Dec 5 07:22:57 PM UTC 2024
00:03:03.889 20:22:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:03.889 v25.01-pre-297-g2c140f58f
00:03:03.889 20:22:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:03.889 20:22:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:03.889 20:22:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:03.889 20:22:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:03.889 20:22:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:03.889 20:22:57 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.146 ************************************
00:03:04.146 START TEST ubsan
00:03:04.146 ************************************
00:03:04.146 20:22:57 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:04.146 using ubsan
00:03:04.146
00:03:04.146 real 0m0.001s
00:03:04.146 user 0m0.000s
00:03:04.146 sys 0m0.000s
00:03:04.146 20:22:57 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:04.146 20:22:57 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:04.146 ************************************
00:03:04.146 END TEST ubsan
00:03:04.146 ************************************
00:03:04.146 20:22:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:04.146 20:22:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:04.146 20:22:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:04.146 20:22:57 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
00:03:04.146 20:22:57 -- spdk/autobuild.sh@52 -- $ llvm_precompile
00:03:04.146 20:22:57 -- common/autobuild_common.sh@445 -- $ run_test autobuild_llvm_precompile _llvm_precompile
00:03:04.146 20:22:57 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:03:04.146 20:22:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:04.146 20:22:57 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.146 ************************************
00:03:04.146 START TEST autobuild_llvm_precompile
00:03:04.146 ************************************
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autotest_common.sh@1129 -- $ _llvm_precompile
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39)
00:03:04.146 Target: x86_64-redhat-linux-gnu
00:03:04.146 Thread model: posix
00:03:04.146 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]]
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a'
00:03:04.146 20:22:57 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:03:04.403 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:03:04.403 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:03:04.970 Using 'verbs' RDMA provider
00:03:20.846 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:35.752 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:35.752 Creating mk/config.mk...done.
00:03:35.752 Creating mk/cc.flags.mk...done.
00:03:35.752 Type 'make' to build.
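The llvm_precompile step above derives the clang major version from `clang --version` and resolves the matching libFuzzer archive before configuring; a condensed sketch of that logic, with the extglob pattern and paths taken verbatim from this run (the config_params value is abbreviated here):

    shopt -s extglob
    clang_num=17
    clang_version=17.0.6   # both parsed from the `clang --version` output above
    export CC=clang-$clang_num CXX=clang++-$clang_num
    # Resolve the clang runtime's fuzzer_no_main archive for this compiler.
    fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
    fuzzer_lib=${fuzzer_libs[0]}
    config_params='--enable-debug --enable-werror'   # abbreviated; full list above
    [[ -e $fuzzer_lib ]] && config_params+=" --with-fuzzer=$fuzzer_lib"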
00:03:35.752
00:03:35.752 real 0m30.105s
00:03:35.752 user 0m13.269s
00:03:35.752 sys 0m16.232s
00:03:35.752 20:23:27 autobuild_llvm_precompile -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:35.752 20:23:27 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:03:35.752 ************************************
00:03:35.752 END TEST autobuild_llvm_precompile
00:03:35.752 ************************************
00:03:35.752 20:23:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:35.752 20:23:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:35.752 20:23:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:35.752 20:23:27 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
00:03:35.752 20:23:27 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:03:35.752 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:03:35.752 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:03:35.752 Using 'verbs' RDMA provider
00:03:48.221 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:58.207 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:58.982 Creating mk/config.mk...done.
00:03:59.032 Creating mk/cc.flags.mk...done.
00:03:59.032 Type 'make' to build.
00:03:59.032 20:23:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j72
00:03:59.032 20:23:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:59.032 20:23:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:59.032 20:23:52 -- common/autotest_common.sh@10 -- $ set +x
00:03:59.032 ************************************
00:03:59.032 START TEST make
00:03:59.032 ************************************
00:03:59.032 20:23:52 make -- common/autotest_common.sh@1129 -- $ make -j72
00:03:59.291 make[1]: Nothing to be done for 'all'.
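Each named step is wrapped by run_test, which produces the START TEST / END TEST banners and the real/user/sys timing seen throughout this log; a simplified stand-in for the observable behavior (not SPDK's actual helper, which lives in test/common/autotest_common.sh):

    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"   # emits the real/user/sys lines recorded above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }
    run_test make make -j72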
00:04:01.200 The Meson build system
00:04:01.200 Version: 1.5.0
00:04:01.200 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:04:01.200 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:01.200 Build type: native build
00:04:01.200 Project name: libvfio-user
00:04:01.200 Project version: 0.0.1
00:04:01.200 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:04:01.200 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:04:01.200 Host machine cpu family: x86_64
00:04:01.200 Host machine cpu: x86_64
00:04:01.200 Run-time dependency threads found: YES
00:04:01.200 Library dl found: YES
00:04:01.200 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:01.200 Run-time dependency json-c found: YES 0.17
00:04:01.200 Run-time dependency cmocka found: YES 1.1.7
00:04:01.200 Program pytest-3 found: NO
00:04:01.200 Program flake8 found: NO
00:04:01.200 Program misspell-fixer found: NO
00:04:01.200 Program restructuredtext-lint found: NO
00:04:01.200 Program valgrind found: YES (/usr/bin/valgrind)
00:04:01.200 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:01.200 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:01.200 Compiler for C supports arguments -Wwrite-strings: YES
00:04:01.200 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:01.200 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:04:01.200 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:04:01.200 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:01.200 Build targets in project: 8
00:04:01.200 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:01.200 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:01.200
00:04:01.200 libvfio-user 0.0.1
00:04:01.200
00:04:01.200 User defined options
00:04:01.200 buildtype : debug
00:04:01.200 default_library: static
00:04:01.200 libdir : /usr/local/lib
00:04:01.200
00:04:01.200 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:01.459 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:01.459 [1/36] Compiling C object samples/lspci.p/lspci.c.o
00:04:01.459 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:01.459 [3/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:01.459 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:04:01.459 [5/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:04:01.459 [6/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:04:01.459 [7/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:01.459 [8/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:04:01.459 [9/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:01.459 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:01.459 [11/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:01.459 [12/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:01.459 [13/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:01.459 [14/36] Compiling C object test/unit_tests.p/mocks.c.o
00:04:01.459 [15/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:01.459 [16/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:04:01.459 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:04:01.459 [18/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:01.459 [19/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:04:01.459 [20/36] Compiling C object samples/null.p/null.c.o
00:04:01.459 [21/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:01.459 [22/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:01.459 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:01.459 [24/36] Compiling C object samples/client.p/client.c.o
00:04:01.459 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:01.459 [26/36] Compiling C object samples/server.p/server.c.o
00:04:01.459 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:01.459 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:04:01.719 [29/36] Linking static target lib/libvfio-user.a
00:04:01.719 [30/36] Linking target samples/client
00:04:01.719 [31/36] Linking target test/unit_tests
00:04:01.719 [32/36] Linking target samples/shadow_ioeventfd_server
00:04:01.719 [33/36] Linking target samples/lspci
00:04:01.719 [34/36] Linking target samples/server
00:04:01.719 [35/36] Linking target samples/null
00:04:01.719 [36/36] Linking target samples/gpio-pci-idio-16
00:04:01.719 INFO: autodetecting backend as ninja
00:04:01.719 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:01.719 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
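Reconstructed from the INFO lines above, the libvfio-user build boils down to a ninja build followed by a staged meson install into a DESTDIR (paths exactly as in this job):

    BUILD=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
    # Build all 36 targets in the debug build directory.
    /usr/local/bin/ninja -C "$BUILD"
    # Stage the install under the SPDK build tree rather than /usr/local.
    DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C "$BUILD"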
00:04:01.977 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:01.977 ninja: no work to do.
00:04:08.556 The Meson build system
00:04:08.557 Version: 1.5.0
00:04:08.557 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:04:08.557 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:04:08.557 Build type: native build
00:04:08.557 Program cat found: YES (/usr/bin/cat)
00:04:08.557 Project name: DPDK
00:04:08.557 Project version: 24.03.0
00:04:08.557 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:04:08.557 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:04:08.557 Host machine cpu family: x86_64
00:04:08.557 Host machine cpu: x86_64
00:04:08.557 Message: ## Building in Developer Mode ##
00:04:08.557 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:08.557 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:08.557 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:08.557 Program python3 found: YES (/usr/bin/python3)
00:04:08.557 Program cat found: YES (/usr/bin/cat)
00:04:08.557 Compiler for C supports arguments -march=native: YES
00:04:08.557 Checking for size of "void *" : 8
00:04:08.557 Checking for size of "void *" : 8 (cached)
00:04:08.557 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:08.557 Library m found: YES
00:04:08.557 Library numa found: YES
00:04:08.557 Has header "numaif.h" : YES
00:04:08.557 Library fdt found: NO
00:04:08.557 Library execinfo found: NO
00:04:08.557 Has header "execinfo.h" : YES
00:04:08.557 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:08.557 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:08.557 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:08.557 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:08.557 Run-time dependency openssl found: YES 3.1.1
00:04:08.557 Run-time dependency libpcap found: YES 1.10.4
00:04:08.557 Has header "pcap.h" with dependency libpcap: YES
00:04:08.557 Compiler for C supports arguments -Wcast-qual: YES
00:04:08.557 Compiler for C supports arguments -Wdeprecated: YES
00:04:08.557 Compiler for C supports arguments -Wformat: YES
00:04:08.557 Compiler for C supports arguments -Wformat-nonliteral: YES
00:04:08.557 Compiler for C supports arguments -Wformat-security: YES
00:04:08.557 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:08.557 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:08.557 Compiler for C supports arguments -Wnested-externs: YES
00:04:08.557 Compiler for C supports arguments -Wold-style-definition: YES
00:04:08.557 Compiler for C supports arguments -Wpointer-arith: YES
00:04:08.557 Compiler for C supports arguments -Wsign-compare: YES
00:04:08.557 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:08.557 Compiler for C supports arguments -Wundef: YES
00:04:08.557 Compiler for C supports arguments -Wwrite-strings: YES
00:04:08.557 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:08.557 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:04:08.557 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:08.557 Program objdump found: YES (/usr/bin/objdump)
00:04:08.557 Compiler for C supports arguments -mavx512f: YES
00:04:08.557 Checking if "AVX512 checking" compiles: YES
00:04:08.557 Fetching value of define "__SSE4_2__" : 1
00:04:08.557 Fetching value of define "__AES__" : 1
00:04:08.557 Fetching value of define "__AVX__" : 1
00:04:08.557 Fetching value of define "__AVX2__" : 1
00:04:08.557 Fetching value of define "__AVX512BW__" : 1
00:04:08.557 Fetching value of define "__AVX512CD__" : 1
00:04:08.557 Fetching value of define "__AVX512DQ__" : 1
00:04:08.557 Fetching value of define "__AVX512F__" : 1
00:04:08.557 Fetching value of define "__AVX512VL__" : 1
00:04:08.557 Fetching value of define "__PCLMUL__" : 1
00:04:08.557 Fetching value of define "__RDRND__" : 1
00:04:08.557 Fetching value of define "__RDSEED__" : 1
00:04:08.557 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:08.557 Fetching value of define "__znver1__" : (undefined)
00:04:08.557 Fetching value of define "__znver2__" : (undefined)
00:04:08.557 Fetching value of define "__znver3__" : (undefined)
00:04:08.557 Fetching value of define "__znver4__" : (undefined)
00:04:08.557 Compiler for C supports arguments -Wno-format-truncation: NO
00:04:08.557 Message: lib/log: Defining dependency "log"
00:04:08.557 Message: lib/kvargs: Defining dependency "kvargs"
00:04:08.557 Message: lib/telemetry: Defining dependency "telemetry"
00:04:08.557 Checking for function "getentropy" : NO
00:04:08.557 Message: lib/eal: Defining dependency "eal"
00:04:08.557 Message: lib/ring: Defining dependency "ring"
00:04:08.557 Message: lib/rcu: Defining dependency "rcu"
00:04:08.557 Message: lib/mempool: Defining dependency "mempool"
00:04:08.557 Message: lib/mbuf: Defining dependency "mbuf"
00:04:08.557 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:08.557 Fetching value of define "__AVX512F__" : 1 (cached)
00:04:08.557 Fetching value of define "__AVX512BW__" : 1 (cached)
00:04:08.557 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:04:08.557 Fetching value of define "__AVX512VL__" : 1 (cached)
00:04:08.557 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:04:08.557 Compiler for C supports arguments -mpclmul: YES
00:04:08.557 Compiler for C supports arguments -maes: YES
00:04:08.557 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:08.557 Compiler for C supports arguments -mavx512bw: YES
00:04:08.557 Compiler for C supports arguments -mavx512dq: YES
00:04:08.557 Compiler for C supports arguments -mavx512vl: YES
00:04:08.557 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:08.557 Compiler for C supports arguments -mavx2: YES
00:04:08.557 Compiler for C supports arguments -mavx: YES
00:04:08.557 Message: lib/net: Defining dependency "net"
00:04:08.557 Message: lib/meter: Defining dependency "meter"
00:04:08.557 Message: lib/ethdev: Defining dependency "ethdev"
00:04:08.557 Message: lib/pci: Defining dependency "pci"
00:04:08.557 Message: lib/cmdline: Defining dependency "cmdline"
00:04:08.557 Message: lib/hash: Defining dependency "hash"
00:04:08.557 Message: lib/timer: Defining dependency "timer"
00:04:08.557 Message: lib/compressdev: Defining dependency "compressdev"
00:04:08.557 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:08.557 Message: lib/dmadev: Defining dependency "dmadev"
00:04:08.557 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:08.557 Message: lib/power: Defining dependency "power"
00:04:08.557 Message: lib/reorder: Defining dependency "reorder"
dependency "reorder" 00:04:08.557 Message: lib/security: Defining dependency "security" 00:04:08.557 Has header "linux/userfaultfd.h" : YES 00:04:08.557 Has header "linux/vduse.h" : YES 00:04:08.557 Message: lib/vhost: Defining dependency "vhost" 00:04:08.557 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:04:08.557 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:08.557 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:08.557 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:08.557 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:08.557 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:08.557 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:08.557 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:08.557 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:08.557 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:08.557 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:08.557 Configuring doxy-api-html.conf using configuration 00:04:08.557 Configuring doxy-api-man.conf using configuration 00:04:08.557 Program mandb found: YES (/usr/bin/mandb) 00:04:08.557 Program sphinx-build found: NO 00:04:08.557 Configuring rte_build_config.h using configuration 00:04:08.557 Message: 00:04:08.557 ================= 00:04:08.557 Applications Enabled 00:04:08.557 ================= 00:04:08.557 00:04:08.557 apps: 00:04:08.557 00:04:08.557 00:04:08.557 Message: 00:04:08.557 ================= 00:04:08.557 Libraries Enabled 00:04:08.557 ================= 00:04:08.557 00:04:08.557 libs: 00:04:08.557 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:08.557 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:08.557 cryptodev, dmadev, power, reorder, security, vhost, 00:04:08.557 00:04:08.557 Message: 00:04:08.557 =============== 00:04:08.557 Drivers Enabled 00:04:08.557 =============== 00:04:08.557 00:04:08.557 common: 00:04:08.557 00:04:08.557 bus: 00:04:08.557 pci, vdev, 00:04:08.557 mempool: 00:04:08.557 ring, 00:04:08.557 dma: 00:04:08.557 00:04:08.557 net: 00:04:08.557 00:04:08.557 crypto: 00:04:08.557 00:04:08.557 compress: 00:04:08.557 00:04:08.557 vdpa: 00:04:08.557 00:04:08.557 00:04:08.557 Message: 00:04:08.557 ================= 00:04:08.557 Content Skipped 00:04:08.557 ================= 00:04:08.557 00:04:08.557 apps: 00:04:08.557 dumpcap: explicitly disabled via build config 00:04:08.557 graph: explicitly disabled via build config 00:04:08.557 pdump: explicitly disabled via build config 00:04:08.557 proc-info: explicitly disabled via build config 00:04:08.557 test-acl: explicitly disabled via build config 00:04:08.557 test-bbdev: explicitly disabled via build config 00:04:08.557 test-cmdline: explicitly disabled via build config 00:04:08.557 test-compress-perf: explicitly disabled via build config 00:04:08.557 test-crypto-perf: explicitly disabled via build config 00:04:08.558 test-dma-perf: explicitly disabled via build config 00:04:08.558 test-eventdev: explicitly disabled via build config 00:04:08.558 test-fib: explicitly disabled via build config 00:04:08.558 test-flow-perf: explicitly disabled via build config 00:04:08.558 test-gpudev: explicitly disabled via build config 00:04:08.558 test-mldev: explicitly disabled via build config 00:04:08.558 test-pipeline: explicitly disabled via build config 00:04:08.558 test-pmd: 
00:04:08.558 test-regex: explicitly disabled via build config
00:04:08.558 test-sad: explicitly disabled via build config
00:04:08.558 test-security-perf: explicitly disabled via build config
00:04:08.558
00:04:08.558 libs:
00:04:08.558 argparse: explicitly disabled via build config
00:04:08.558 metrics: explicitly disabled via build config
00:04:08.558 acl: explicitly disabled via build config
00:04:08.558 bbdev: explicitly disabled via build config
00:04:08.558 bitratestats: explicitly disabled via build config
00:04:08.558 bpf: explicitly disabled via build config
00:04:08.558 cfgfile: explicitly disabled via build config
00:04:08.558 distributor: explicitly disabled via build config
00:04:08.558 efd: explicitly disabled via build config
00:04:08.558 eventdev: explicitly disabled via build config
00:04:08.558 dispatcher: explicitly disabled via build config
00:04:08.558 gpudev: explicitly disabled via build config
00:04:08.558 gro: explicitly disabled via build config
00:04:08.558 gso: explicitly disabled via build config
00:04:08.558 ip_frag: explicitly disabled via build config
00:04:08.558 jobstats: explicitly disabled via build config
00:04:08.558 latencystats: explicitly disabled via build config
00:04:08.558 lpm: explicitly disabled via build config
00:04:08.558 member: explicitly disabled via build config
00:04:08.558 pcapng: explicitly disabled via build config
00:04:08.558 rawdev: explicitly disabled via build config
00:04:08.558 regexdev: explicitly disabled via build config
00:04:08.558 mldev: explicitly disabled via build config
00:04:08.558 rib: explicitly disabled via build config
00:04:08.558 sched: explicitly disabled via build config
00:04:08.558 stack: explicitly disabled via build config
00:04:08.558 ipsec: explicitly disabled via build config
00:04:08.558 pdcp: explicitly disabled via build config
00:04:08.558 fib: explicitly disabled via build config
00:04:08.558 port: explicitly disabled via build config
00:04:08.558 pdump: explicitly disabled via build config
00:04:08.558 table: explicitly disabled via build config
00:04:08.558 pipeline: explicitly disabled via build config
00:04:08.558 graph: explicitly disabled via build config
00:04:08.558 node: explicitly disabled via build config
00:04:08.558
00:04:08.558 drivers:
00:04:08.558 common/cpt: not in enabled drivers build config
00:04:08.558 common/dpaax: not in enabled drivers build config
00:04:08.558 common/iavf: not in enabled drivers build config
00:04:08.558 common/idpf: not in enabled drivers build config
00:04:08.558 common/ionic: not in enabled drivers build config
00:04:08.558 common/mvep: not in enabled drivers build config
00:04:08.558 common/octeontx: not in enabled drivers build config
00:04:08.558 bus/auxiliary: not in enabled drivers build config
00:04:08.558 bus/cdx: not in enabled drivers build config
00:04:08.558 bus/dpaa: not in enabled drivers build config
00:04:08.558 bus/fslmc: not in enabled drivers build config
00:04:08.558 bus/ifpga: not in enabled drivers build config
00:04:08.558 bus/platform: not in enabled drivers build config
00:04:08.558 bus/uacce: not in enabled drivers build config
00:04:08.558 bus/vmbus: not in enabled drivers build config
00:04:08.558 common/cnxk: not in enabled drivers build config
00:04:08.558 common/mlx5: not in enabled drivers build config
00:04:08.558 common/nfp: not in enabled drivers build config
00:04:08.558 common/nitrox: not in enabled drivers build config
00:04:08.558 common/qat: not in enabled drivers build config
00:04:08.558 common/sfc_efx: not in enabled drivers build config
00:04:08.558 mempool/bucket: not in enabled drivers build config
00:04:08.558 mempool/cnxk: not in enabled drivers build config
00:04:08.558 mempool/dpaa: not in enabled drivers build config
00:04:08.558 mempool/dpaa2: not in enabled drivers build config
00:04:08.558 mempool/octeontx: not in enabled drivers build config
00:04:08.558 mempool/stack: not in enabled drivers build config
00:04:08.558 dma/cnxk: not in enabled drivers build config
00:04:08.558 dma/dpaa: not in enabled drivers build config
00:04:08.558 dma/dpaa2: not in enabled drivers build config
00:04:08.558 dma/hisilicon: not in enabled drivers build config
00:04:08.558 dma/idxd: not in enabled drivers build config
00:04:08.558 dma/ioat: not in enabled drivers build config
00:04:08.558 dma/skeleton: not in enabled drivers build config
00:04:08.558 net/af_packet: not in enabled drivers build config
00:04:08.558 net/af_xdp: not in enabled drivers build config
00:04:08.558 net/ark: not in enabled drivers build config
00:04:08.558 net/atlantic: not in enabled drivers build config
00:04:08.558 net/avp: not in enabled drivers build config
00:04:08.558 net/axgbe: not in enabled drivers build config
00:04:08.558 net/bnx2x: not in enabled drivers build config
00:04:08.558 net/bnxt: not in enabled drivers build config
00:04:08.558 net/bonding: not in enabled drivers build config
00:04:08.558 net/cnxk: not in enabled drivers build config
00:04:08.558 net/cpfl: not in enabled drivers build config
00:04:08.558 net/cxgbe: not in enabled drivers build config
00:04:08.558 net/dpaa: not in enabled drivers build config
00:04:08.558 net/dpaa2: not in enabled drivers build config
00:04:08.558 net/e1000: not in enabled drivers build config
00:04:08.558 net/ena: not in enabled drivers build config
00:04:08.558 net/enetc: not in enabled drivers build config
00:04:08.558 net/enetfec: not in enabled drivers build config
00:04:08.558 net/enic: not in enabled drivers build config
00:04:08.558 net/failsafe: not in enabled drivers build config
00:04:08.558 net/fm10k: not in enabled drivers build config
00:04:08.558 net/gve: not in enabled drivers build config
00:04:08.558 net/hinic: not in enabled drivers build config
00:04:08.558 net/hns3: not in enabled drivers build config
00:04:08.558 net/i40e: not in enabled drivers build config
00:04:08.558 net/iavf: not in enabled drivers build config
00:04:08.558 net/ice: not in enabled drivers build config
00:04:08.558 net/idpf: not in enabled drivers build config
00:04:08.558 net/igc: not in enabled drivers build config
00:04:08.558 net/ionic: not in enabled drivers build config
00:04:08.558 net/ipn3ke: not in enabled drivers build config
00:04:08.558 net/ixgbe: not in enabled drivers build config
00:04:08.558 net/mana: not in enabled drivers build config
00:04:08.558 net/memif: not in enabled drivers build config
00:04:08.558 net/mlx4: not in enabled drivers build config
00:04:08.558 net/mlx5: not in enabled drivers build config
00:04:08.558 net/mvneta: not in enabled drivers build config
00:04:08.558 net/mvpp2: not in enabled drivers build config
00:04:08.558 net/netvsc: not in enabled drivers build config
00:04:08.558 net/nfb: not in enabled drivers build config
00:04:08.558 net/nfp: not in enabled drivers build config
00:04:08.558 net/ngbe: not in enabled drivers build config
00:04:08.558 net/null: not in enabled drivers build config
00:04:08.558 net/octeontx: not in enabled drivers build config
00:04:08.558 net/octeon_ep: not in enabled drivers build config
00:04:08.558 net/pcap: not in enabled drivers build config
00:04:08.558 net/pfe: not in enabled drivers build config
00:04:08.558 net/qede: not in enabled drivers build config
00:04:08.558 net/ring: not in enabled drivers build config
00:04:08.558 net/sfc: not in enabled drivers build config
00:04:08.558 net/softnic: not in enabled drivers build config
00:04:08.558 net/tap: not in enabled drivers build config
00:04:08.558 net/thunderx: not in enabled drivers build config
00:04:08.558 net/txgbe: not in enabled drivers build config
00:04:08.558 net/vdev_netvsc: not in enabled drivers build config
00:04:08.558 net/vhost: not in enabled drivers build config
00:04:08.558 net/virtio: not in enabled drivers build config
00:04:08.558 net/vmxnet3: not in enabled drivers build config
00:04:08.558 raw/*: missing internal dependency, "rawdev"
00:04:08.558 crypto/armv8: not in enabled drivers build config
00:04:08.558 crypto/bcmfs: not in enabled drivers build config
00:04:08.558 crypto/caam_jr: not in enabled drivers build config
00:04:08.558 crypto/ccp: not in enabled drivers build config
00:04:08.558 crypto/cnxk: not in enabled drivers build config
00:04:08.558 crypto/dpaa_sec: not in enabled drivers build config
00:04:08.558 crypto/dpaa2_sec: not in enabled drivers build config
00:04:08.558 crypto/ipsec_mb: not in enabled drivers build config
00:04:08.558 crypto/mlx5: not in enabled drivers build config
00:04:08.558 crypto/mvsam: not in enabled drivers build config
00:04:08.558 crypto/nitrox: not in enabled drivers build config
00:04:08.558 crypto/null: not in enabled drivers build config
00:04:08.558 crypto/octeontx: not in enabled drivers build config
00:04:08.558 crypto/openssl: not in enabled drivers build config
00:04:08.558 crypto/scheduler: not in enabled drivers build config
00:04:08.558 crypto/uadk: not in enabled drivers build config
00:04:08.558 crypto/virtio: not in enabled drivers build config
00:04:08.558 compress/isal: not in enabled drivers build config
00:04:08.558 compress/mlx5: not in enabled drivers build config
00:04:08.558 compress/nitrox: not in enabled drivers build config
00:04:08.558 compress/octeontx: not in enabled drivers build config
00:04:08.558 compress/zlib: not in enabled drivers build config
00:04:08.558 regex/*: missing internal dependency, "regexdev"
00:04:08.558 ml/*: missing internal dependency, "mldev"
00:04:08.558 vdpa/ifc: not in enabled drivers build config
00:04:08.558 vdpa/mlx5: not in enabled drivers build config
00:04:08.558 vdpa/nfp: not in enabled drivers build config
00:04:08.558 vdpa/sfc: not in enabled drivers build config
00:04:08.558 event/*: missing internal dependency, "eventdev"
00:04:08.558 baseband/*: missing internal dependency, "bbdev"
00:04:08.558 gpu/*: missing internal dependency, "gpudev"
00:04:08.558
00:04:08.558
00:04:08.558 Build targets in project: 85
00:04:08.558
00:04:08.558 DPDK 24.03.0
00:04:08.558
00:04:08.558 User defined options
00:04:08.559 buildtype : debug
00:04:08.559 default_library : static
00:04:08.559 libdir : lib
00:04:08.559 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:04:08.559 c_args : -fPIC -Werror
00:04:08.559 c_link_args :
00:04:08.559 cpu_instruction_set: native
00:04:08.559 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:04:08.559 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:04:08.559 enable_docs : false
00:04:08.559 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:04:08.559 enable_kmods : false
00:04:08.559 max_lcores : 128
00:04:08.559 tests : false
00:04:08.559
00:04:08.559 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:08.559 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:04:08.559 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:04:08.559 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:04:08.559 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:04:08.559 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:04:08.559 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:04:08.559 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:04:08.559 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:04:08.559 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:04:08.559 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:04:08.559 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:04:08.559 [11/268] Linking static target lib/librte_kvargs.a
00:04:08.559 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:04:08.559 [13/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:04:08.559 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:04:08.559 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:04:08.559 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:04:08.559 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:04:08.559 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:04:08.819 [19/268] Linking static target lib/librte_log.a
00:04:09.078 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:04:09.078 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:09.078 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:04:09.078 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:04:09.078 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:04:09.078 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:04:09.078 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:04:09.078 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:04:09.078 [28/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:04:09.078 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:04:09.078 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:04:09.078 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:04:09.078 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:04:09.078 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:09.078 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:09.078 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:09.078 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:09.078 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:09.078 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:09.078 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:09.078 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:09.078 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:09.078 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:09.078 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:09.078 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:09.340 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:09.340 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:09.340 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:09.340 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:09.340 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:09.340 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:09.340 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:09.340 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:09.340 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:09.340 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:09.340 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:09.340 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:09.340 [57/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:09.340 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:09.340 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:09.340 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:09.340 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:09.340 [62/268] Linking static target lib/librte_telemetry.a 00:04:09.340 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:09.340 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:09.340 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:09.340 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:09.340 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:09.340 [68/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:09.340 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:09.340 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:09.340 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:09.340 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:09.340 [73/268] Compiling C 
object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:09.340 [74/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:09.340 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:09.340 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:09.340 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:09.340 [78/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.340 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:09.340 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:09.340 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:09.340 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:09.340 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:09.340 [84/268] Linking static target lib/librte_ring.a 00:04:09.340 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:09.340 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:09.340 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:09.340 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:09.340 [89/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:09.340 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:09.340 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:09.340 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:09.340 [93/268] Linking static target lib/librte_pci.a 00:04:09.340 [94/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:09.340 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:09.340 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:09.340 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:09.340 [98/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:09.340 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:09.340 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:09.340 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:09.340 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:09.340 [103/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:09.340 [104/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:09.340 [105/268] Linking static target lib/librte_eal.a 00:04:09.340 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:09.340 [107/268] Linking static target lib/librte_rcu.a 00:04:09.340 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:09.601 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:09.601 [110/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:09.601 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:09.601 [112/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:09.601 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:09.601 [114/268] Linking static target lib/librte_mempool.a 
00:04:09.601 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:09.601 [116/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.601 [117/268] Linking static target lib/librte_mbuf.a 00:04:09.601 [118/268] Linking target lib/librte_log.so.24.1 00:04:09.601 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.860 [120/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:09.860 [121/268] Linking static target lib/librte_net.a 00:04:09.860 [122/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:09.860 [123/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:09.860 [124/268] Linking static target lib/librte_meter.a 00:04:09.860 [125/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.860 [126/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:09.860 [127/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:09.860 [128/268] Linking static target lib/librte_timer.a 00:04:09.860 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:09.860 [130/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:09.860 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:09.860 [132/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:09.860 [133/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:09.860 [134/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.860 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:09.860 [136/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:09.860 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:09.860 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:09.860 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:09.860 [140/268] Linking static target lib/librte_cmdline.a 00:04:09.860 [141/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:09.860 [142/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:09.860 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:09.860 [144/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:09.860 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:09.860 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:09.860 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:09.860 [148/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:09.860 [149/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:09.860 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:09.860 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:09.860 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:09.860 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:09.860 [154/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:09.860 [155/268] Linking static target lib/librte_dmadev.a 00:04:09.860 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:09.860 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:09.860 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:09.860 [159/268] Linking target lib/librte_kvargs.so.24.1 00:04:09.860 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:09.860 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:09.860 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:09.860 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:09.860 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:09.860 [165/268] Linking target lib/librte_telemetry.so.24.1 00:04:10.121 [166/268] Linking static target lib/librte_compressdev.a 00:04:10.121 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:10.121 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:10.121 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:10.121 [170/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:10.121 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:10.121 [172/268] Linking static target lib/librte_security.a 00:04:10.121 [173/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:10.121 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:10.121 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:10.121 [176/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:10.121 [177/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.121 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:10.121 [179/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:10.121 [180/268] Linking static target lib/librte_reorder.a 00:04:10.121 [181/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:10.121 [182/268] Linking static target lib/librte_power.a 00:04:10.121 [183/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.121 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:10.121 [185/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:10.121 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:10.121 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:10.121 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:10.121 [189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:10.121 [190/268] Linking static target lib/librte_hash.a 00:04:10.121 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:10.121 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:10.121 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:10.121 [194/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:10.121 [195/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:10.121 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:10.380 [197/268] Linking static target lib/librte_cryptodev.a 00:04:10.380 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:10.380 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:10.380 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:10.380 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:10.380 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:10.380 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.380 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:10.380 [205/268] Linking static target drivers/librte_bus_vdev.a 00:04:10.380 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:10.380 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:10.380 [208/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.380 [209/268] Linking static target drivers/librte_bus_pci.a 00:04:10.380 [210/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:10.380 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.380 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:10.380 [213/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:10.639 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.639 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:10.639 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:10.639 [217/268] Linking static target drivers/librte_mempool_ring.a 00:04:10.639 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:10.639 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.639 [220/268] Linking static target lib/librte_ethdev.a 00:04:10.639 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.639 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.639 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.207 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.207 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:11.207 [226/268] Linking static target lib/librte_vhost.a 00:04:11.207 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.207 [228/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.207 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.582 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.146 [231/268] Generating lib/vhost.sym_chk 
with a custom command (wrapped by meson to capture output) 00:04:21.263 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.263 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.263 [234/268] Linking target lib/librte_eal.so.24.1 00:04:21.521 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:21.521 [236/268] Linking target lib/librte_meter.so.24.1 00:04:21.521 [237/268] Linking target lib/librte_timer.so.24.1 00:04:21.521 [238/268] Linking target lib/librte_pci.so.24.1 00:04:21.521 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:21.521 [240/268] Linking target lib/librte_dmadev.so.24.1 00:04:21.521 [241/268] Linking target lib/librte_ring.so.24.1 00:04:21.780 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:21.780 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:21.780 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:21.780 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:21.780 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:21.780 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:21.780 [248/268] Linking target lib/librte_rcu.so.24.1 00:04:21.780 [249/268] Linking target lib/librte_mempool.so.24.1 00:04:21.780 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:22.038 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:22.038 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:22.038 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:22.038 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:22.038 [255/268] Linking target lib/librte_compressdev.so.24.1 00:04:22.038 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:04:22.297 [257/268] Linking target lib/librte_net.so.24.1 00:04:22.297 [258/268] Linking target lib/librte_reorder.so.24.1 00:04:22.297 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:22.297 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:22.297 [261/268] Linking target lib/librte_security.so.24.1 00:04:22.297 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:22.297 [263/268] Linking target lib/librte_ethdev.so.24.1 00:04:22.297 [264/268] Linking target lib/librte_hash.so.24.1 00:04:22.556 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:22.556 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:22.556 [267/268] Linking target lib/librte_vhost.so.24.1 00:04:22.556 [268/268] Linking target lib/librte_power.so.24.1 00:04:22.556 INFO: autodetecting backend as ninja 00:04:22.556 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:04:23.935 CC lib/log/log.o 00:04:23.935 CC lib/log/log_flags.o 00:04:23.935 CC lib/ut_mock/mock.o 00:04:23.935 CC lib/log/log_deprecated.o 00:04:23.935 CC lib/ut/ut.o 00:04:23.935 LIB libspdk_ut_mock.a 00:04:23.935 LIB libspdk_log.a 00:04:23.935 LIB libspdk_ut.a 00:04:23.935 CC lib/util/base64.o 00:04:23.935 CC 
lib/util/cpuset.o 00:04:23.935 CC lib/util/crc16.o 00:04:23.935 CC lib/util/bit_array.o 00:04:23.935 CC lib/util/crc32.o 00:04:23.935 CC lib/util/crc32c.o 00:04:23.935 CC lib/util/crc32_ieee.o 00:04:23.935 CC lib/util/dif.o 00:04:23.935 CC lib/util/crc64.o 00:04:23.935 CC lib/util/fd.o 00:04:23.935 CC lib/util/file.o 00:04:23.935 CC lib/util/fd_group.o 00:04:23.935 CC lib/util/hexlify.o 00:04:23.935 CXX lib/trace_parser/trace.o 00:04:23.935 CC lib/dma/dma.o 00:04:23.935 CC lib/util/math.o 00:04:23.935 CC lib/util/net.o 00:04:23.935 CC lib/util/iov.o 00:04:23.935 CC lib/util/pipe.o 00:04:23.935 CC lib/util/strerror_tls.o 00:04:23.935 CC lib/util/string.o 00:04:23.935 CC lib/util/uuid.o 00:04:23.935 CC lib/util/xor.o 00:04:23.935 CC lib/util/zipf.o 00:04:23.935 CC lib/util/md5.o 00:04:24.194 CC lib/ioat/ioat.o 00:04:24.194 CC lib/vfio_user/host/vfio_user_pci.o 00:04:24.194 CC lib/vfio_user/host/vfio_user.o 00:04:24.194 LIB libspdk_dma.a 00:04:24.194 LIB libspdk_ioat.a 00:04:24.452 LIB libspdk_vfio_user.a 00:04:24.452 LIB libspdk_util.a 00:04:24.452 LIB libspdk_trace_parser.a 00:04:24.710 CC lib/json/json_parse.o 00:04:24.710 CC lib/json/json_util.o 00:04:24.710 CC lib/json/json_write.o 00:04:24.710 CC lib/vmd/led.o 00:04:24.710 CC lib/vmd/vmd.o 00:04:24.710 CC lib/idxd/idxd_kernel.o 00:04:24.710 CC lib/idxd/idxd.o 00:04:24.710 CC lib/idxd/idxd_user.o 00:04:24.710 CC lib/conf/conf.o 00:04:24.710 CC lib/rdma_utils/rdma_utils.o 00:04:24.710 CC lib/env_dpdk/env.o 00:04:24.710 CC lib/env_dpdk/pci.o 00:04:24.710 CC lib/env_dpdk/memory.o 00:04:24.710 CC lib/env_dpdk/init.o 00:04:24.710 CC lib/env_dpdk/threads.o 00:04:24.710 CC lib/env_dpdk/pci_ioat.o 00:04:24.710 CC lib/env_dpdk/pci_virtio.o 00:04:24.710 CC lib/env_dpdk/pci_vmd.o 00:04:24.710 CC lib/env_dpdk/pci_dpdk.o 00:04:24.710 CC lib/env_dpdk/pci_idxd.o 00:04:24.710 CC lib/env_dpdk/pci_event.o 00:04:24.710 CC lib/env_dpdk/sigbus_handler.o 00:04:24.710 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:24.710 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:24.969 LIB libspdk_conf.a 00:04:24.969 LIB libspdk_json.a 00:04:24.969 LIB libspdk_rdma_utils.a 00:04:24.969 LIB libspdk_idxd.a 00:04:24.969 LIB libspdk_vmd.a 00:04:25.228 CC lib/jsonrpc/jsonrpc_server.o 00:04:25.228 CC lib/jsonrpc/jsonrpc_client.o 00:04:25.228 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:25.228 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:25.228 CC lib/rdma_provider/common.o 00:04:25.228 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:25.228 LIB libspdk_jsonrpc.a 00:04:25.228 LIB libspdk_rdma_provider.a 00:04:25.486 CC lib/rpc/rpc.o 00:04:25.745 LIB libspdk_env_dpdk.a 00:04:25.745 LIB libspdk_rpc.a 00:04:26.003 CC lib/keyring/keyring.o 00:04:26.003 CC lib/keyring/keyring_rpc.o 00:04:26.003 CC lib/notify/notify.o 00:04:26.003 CC lib/notify/notify_rpc.o 00:04:26.003 CC lib/trace/trace.o 00:04:26.003 CC lib/trace/trace_flags.o 00:04:26.003 CC lib/trace/trace_rpc.o 00:04:26.262 LIB libspdk_notify.a 00:04:26.262 LIB libspdk_keyring.a 00:04:26.262 LIB libspdk_trace.a 00:04:26.521 CC lib/thread/thread.o 00:04:26.521 CC lib/thread/iobuf.o 00:04:26.521 CC lib/sock/sock.o 00:04:26.521 CC lib/sock/sock_rpc.o 00:04:26.780 LIB libspdk_sock.a 00:04:27.038 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:27.038 CC lib/nvme/nvme_ctrlr.o 00:04:27.038 CC lib/nvme/nvme_ns_cmd.o 00:04:27.038 CC lib/nvme/nvme_fabric.o 00:04:27.038 CC lib/nvme/nvme_pcie.o 00:04:27.038 CC lib/nvme/nvme_ns.o 00:04:27.038 CC lib/nvme/nvme_pcie_common.o 00:04:27.038 CC lib/nvme/nvme_quirks.o 00:04:27.038 CC lib/nvme/nvme_qpair.o 00:04:27.038 CC 
lib/nvme/nvme.o 00:04:27.038 CC lib/nvme/nvme_transport.o 00:04:27.038 CC lib/nvme/nvme_discovery.o 00:04:27.038 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:27.038 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:27.038 CC lib/nvme/nvme_tcp.o 00:04:27.038 CC lib/nvme/nvme_opal.o 00:04:27.038 CC lib/nvme/nvme_io_msg.o 00:04:27.038 CC lib/nvme/nvme_poll_group.o 00:04:27.038 CC lib/nvme/nvme_zns.o 00:04:27.038 CC lib/nvme/nvme_stubs.o 00:04:27.038 CC lib/nvme/nvme_auth.o 00:04:27.038 CC lib/nvme/nvme_cuse.o 00:04:27.038 CC lib/nvme/nvme_vfio_user.o 00:04:27.038 CC lib/nvme/nvme_rdma.o 00:04:27.297 LIB libspdk_thread.a 00:04:27.556 CC lib/vfu_tgt/tgt_endpoint.o 00:04:27.556 CC lib/vfu_tgt/tgt_rpc.o 00:04:27.556 CC lib/blob/zeroes.o 00:04:27.556 CC lib/blob/blobstore.o 00:04:27.556 CC lib/blob/request.o 00:04:27.556 CC lib/blob/blob_bs_dev.o 00:04:27.556 CC lib/accel/accel.o 00:04:27.556 CC lib/accel/accel_rpc.o 00:04:27.556 CC lib/accel/accel_sw.o 00:04:27.556 CC lib/init/json_config.o 00:04:27.556 CC lib/init/subsystem.o 00:04:27.556 CC lib/init/subsystem_rpc.o 00:04:27.556 CC lib/init/rpc.o 00:04:27.556 CC lib/virtio/virtio.o 00:04:27.556 CC lib/fsdev/fsdev_io.o 00:04:27.556 CC lib/fsdev/fsdev_rpc.o 00:04:27.556 CC lib/virtio/virtio_vfio_user.o 00:04:27.556 CC lib/virtio/virtio_vhost_user.o 00:04:27.556 CC lib/virtio/virtio_pci.o 00:04:27.556 CC lib/fsdev/fsdev.o 00:04:27.814 LIB libspdk_init.a 00:04:27.814 LIB libspdk_vfu_tgt.a 00:04:27.814 LIB libspdk_virtio.a 00:04:28.073 LIB libspdk_fsdev.a 00:04:28.073 CC lib/event/reactor.o 00:04:28.073 CC lib/event/log_rpc.o 00:04:28.073 CC lib/event/app.o 00:04:28.073 CC lib/event/app_rpc.o 00:04:28.073 CC lib/event/scheduler_static.o 00:04:28.331 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:28.331 LIB libspdk_event.a 00:04:28.331 LIB libspdk_accel.a 00:04:28.589 LIB libspdk_nvme.a 00:04:28.589 CC lib/bdev/bdev_rpc.o 00:04:28.589 CC lib/bdev/bdev.o 00:04:28.589 CC lib/bdev/part.o 00:04:28.589 CC lib/bdev/bdev_zone.o 00:04:28.589 CC lib/bdev/scsi_nvme.o 00:04:28.589 LIB libspdk_fuse_dispatcher.a 00:04:29.524 LIB libspdk_blob.a 00:04:29.524 CC lib/blobfs/blobfs.o 00:04:29.524 CC lib/lvol/lvol.o 00:04:29.524 CC lib/blobfs/tree.o 00:04:30.091 LIB libspdk_lvol.a 00:04:30.091 LIB libspdk_blobfs.a 00:04:30.349 LIB libspdk_bdev.a 00:04:30.916 CC lib/nbd/nbd.o 00:04:30.916 CC lib/nbd/nbd_rpc.o 00:04:30.916 CC lib/nvmf/ctrlr.o 00:04:30.916 CC lib/nvmf/ctrlr_discovery.o 00:04:30.916 CC lib/nvmf/ctrlr_bdev.o 00:04:30.916 CC lib/nvmf/nvmf_rpc.o 00:04:30.916 CC lib/nvmf/subsystem.o 00:04:30.916 CC lib/nvmf/nvmf.o 00:04:30.916 CC lib/nvmf/transport.o 00:04:30.916 CC lib/nvmf/tcp.o 00:04:30.916 CC lib/nvmf/stubs.o 00:04:30.916 CC lib/nvmf/mdns_server.o 00:04:30.916 CC lib/nvmf/auth.o 00:04:30.916 CC lib/nvmf/vfio_user.o 00:04:30.916 CC lib/nvmf/rdma.o 00:04:30.916 CC lib/ublk/ublk.o 00:04:30.916 CC lib/ublk/ublk_rpc.o 00:04:30.916 CC lib/scsi/lun.o 00:04:30.916 CC lib/scsi/dev.o 00:04:30.916 CC lib/scsi/scsi_bdev.o 00:04:30.917 CC lib/scsi/port.o 00:04:30.917 CC lib/scsi/scsi.o 00:04:30.917 CC lib/ftl/ftl_core.o 00:04:30.917 CC lib/scsi/scsi_pr.o 00:04:30.917 CC lib/scsi/scsi_rpc.o 00:04:30.917 CC lib/ftl/ftl_init.o 00:04:30.917 CC lib/scsi/task.o 00:04:30.917 CC lib/ftl/ftl_layout.o 00:04:30.917 CC lib/ftl/ftl_debug.o 00:04:30.917 CC lib/ftl/ftl_io.o 00:04:30.917 CC lib/ftl/ftl_sb.o 00:04:30.917 CC lib/ftl/ftl_l2p.o 00:04:30.917 CC lib/ftl/ftl_l2p_flat.o 00:04:30.917 CC lib/ftl/ftl_nv_cache.o 00:04:30.917 CC lib/ftl/ftl_band.o 00:04:30.917 CC lib/ftl/ftl_writer.o 00:04:30.917 
CC lib/ftl/ftl_band_ops.o 00:04:30.917 CC lib/ftl/ftl_reloc.o 00:04:30.917 CC lib/ftl/ftl_rq.o 00:04:30.917 CC lib/ftl/ftl_l2p_cache.o 00:04:30.917 CC lib/ftl/ftl_p2l.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt.o 00:04:30.917 CC lib/ftl/ftl_p2l_log.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:30.917 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:30.917 CC lib/ftl/utils/ftl_conf.o 00:04:30.917 CC lib/ftl/utils/ftl_md.o 00:04:30.917 CC lib/ftl/utils/ftl_mempool.o 00:04:30.917 CC lib/ftl/utils/ftl_bitmap.o 00:04:30.917 CC lib/ftl/utils/ftl_property.o 00:04:30.917 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:30.917 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:30.917 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:30.917 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:30.917 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:30.917 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:30.917 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:30.917 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:30.917 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:30.917 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:30.917 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:30.917 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:30.917 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:30.917 CC lib/ftl/base/ftl_base_dev.o 00:04:30.917 CC lib/ftl/base/ftl_base_bdev.o 00:04:30.917 CC lib/ftl/ftl_trace.o 00:04:31.176 LIB libspdk_nbd.a 00:04:31.176 LIB libspdk_scsi.a 00:04:31.436 LIB libspdk_ublk.a 00:04:31.436 CC lib/iscsi/conn.o 00:04:31.436 CC lib/iscsi/init_grp.o 00:04:31.695 CC lib/iscsi/iscsi.o 00:04:31.695 CC lib/iscsi/param.o 00:04:31.696 CC lib/iscsi/portal_grp.o 00:04:31.696 CC lib/iscsi/tgt_node.o 00:04:31.696 CC lib/iscsi/iscsi_subsystem.o 00:04:31.696 CC lib/iscsi/iscsi_rpc.o 00:04:31.696 CC lib/iscsi/task.o 00:04:31.696 LIB libspdk_ftl.a 00:04:31.696 CC lib/vhost/vhost.o 00:04:31.696 CC lib/vhost/vhost_blk.o 00:04:31.696 CC lib/vhost/vhost_rpc.o 00:04:31.696 CC lib/vhost/vhost_scsi.o 00:04:31.696 CC lib/vhost/rte_vhost_user.o 00:04:32.263 LIB libspdk_nvmf.a 00:04:32.263 LIB libspdk_vhost.a 00:04:32.263 LIB libspdk_iscsi.a 00:04:32.829 CC module/env_dpdk/env_dpdk_rpc.o 00:04:32.829 CC module/vfu_device/vfu_virtio.o 00:04:32.829 CC module/vfu_device/vfu_virtio_blk.o 00:04:32.829 CC module/vfu_device/vfu_virtio_fs.o 00:04:32.829 CC module/vfu_device/vfu_virtio_scsi.o 00:04:32.829 CC module/vfu_device/vfu_virtio_rpc.o 00:04:32.829 CC module/blob/bdev/blob_bdev.o 00:04:32.829 LIB libspdk_env_dpdk_rpc.a 00:04:32.829 CC module/scheduler/gscheduler/gscheduler.o 00:04:32.829 CC module/accel/ioat/accel_ioat.o 00:04:32.829 CC module/fsdev/aio/fsdev_aio.o 00:04:32.829 CC module/accel/ioat/accel_ioat_rpc.o 00:04:32.829 CC module/accel/dsa/accel_dsa.o 00:04:32.829 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:32.829 CC module/fsdev/aio/linux_aio_mgr.o 00:04:32.829 CC module/accel/dsa/accel_dsa_rpc.o 00:04:32.829 CC module/accel/iaa/accel_iaa.o 00:04:32.829 CC module/accel/iaa/accel_iaa_rpc.o 00:04:32.829 CC module/keyring/linux/keyring.o 00:04:32.829 CC module/keyring/linux/keyring_rpc.o 00:04:32.829 CC module/sock/posix/posix.o 00:04:32.829 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:04:32.829 CC module/accel/error/accel_error.o 00:04:32.829 CC module/accel/error/accel_error_rpc.o 00:04:32.829 CC module/keyring/file/keyring_rpc.o 00:04:32.829 CC module/keyring/file/keyring.o 00:04:32.829 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:33.086 LIB libspdk_scheduler_gscheduler.a 00:04:33.086 LIB libspdk_keyring_linux.a 00:04:33.086 LIB libspdk_accel_ioat.a 00:04:33.086 LIB libspdk_keyring_file.a 00:04:33.086 LIB libspdk_scheduler_dpdk_governor.a 00:04:33.086 LIB libspdk_scheduler_dynamic.a 00:04:33.086 LIB libspdk_accel_error.a 00:04:33.086 LIB libspdk_accel_iaa.a 00:04:33.086 LIB libspdk_blob_bdev.a 00:04:33.086 LIB libspdk_accel_dsa.a 00:04:33.343 LIB libspdk_vfu_device.a 00:04:33.343 LIB libspdk_fsdev_aio.a 00:04:33.343 LIB libspdk_sock_posix.a 00:04:33.343 CC module/bdev/malloc/bdev_malloc.o 00:04:33.343 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:33.343 CC module/bdev/aio/bdev_aio.o 00:04:33.343 CC module/bdev/passthru/vbdev_passthru.o 00:04:33.343 CC module/bdev/aio/bdev_aio_rpc.o 00:04:33.343 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:33.343 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:33.343 CC module/bdev/delay/vbdev_delay.o 00:04:33.343 CC module/bdev/null/bdev_null_rpc.o 00:04:33.343 CC module/bdev/null/bdev_null.o 00:04:33.343 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:33.343 CC module/bdev/error/vbdev_error.o 00:04:33.343 CC module/bdev/raid/bdev_raid.o 00:04:33.343 CC module/bdev/error/vbdev_error_rpc.o 00:04:33.343 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:33.343 CC module/bdev/raid/bdev_raid_sb.o 00:04:33.343 CC module/bdev/raid/raid1.o 00:04:33.343 CC module/bdev/raid/raid0.o 00:04:33.343 CC module/bdev/raid/concat.o 00:04:33.343 CC module/bdev/raid/bdev_raid_rpc.o 00:04:33.343 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:33.343 CC module/bdev/lvol/vbdev_lvol.o 00:04:33.343 CC module/bdev/split/vbdev_split.o 00:04:33.343 CC module/bdev/gpt/vbdev_gpt.o 00:04:33.343 CC module/bdev/ftl/bdev_ftl.o 00:04:33.343 CC module/bdev/split/vbdev_split_rpc.o 00:04:33.343 CC module/bdev/iscsi/bdev_iscsi.o 00:04:33.343 CC module/bdev/gpt/gpt.o 00:04:33.343 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:33.343 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:33.343 CC module/blobfs/bdev/blobfs_bdev.o 00:04:33.343 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:33.343 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:33.343 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:33.343 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:33.343 CC module/bdev/nvme/bdev_nvme.o 00:04:33.343 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:33.343 CC module/bdev/nvme/nvme_rpc.o 00:04:33.343 CC module/bdev/nvme/bdev_mdns_client.o 00:04:33.343 CC module/bdev/nvme/vbdev_opal.o 00:04:33.343 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:33.343 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:33.600 LIB libspdk_bdev_null.a 00:04:33.600 LIB libspdk_bdev_split.a 00:04:33.600 LIB libspdk_bdev_error.a 00:04:33.600 LIB libspdk_bdev_passthru.a 00:04:33.600 LIB libspdk_bdev_gpt.a 00:04:33.600 LIB libspdk_bdev_aio.a 00:04:33.600 LIB libspdk_blobfs_bdev.a 00:04:33.600 LIB libspdk_bdev_delay.a 00:04:33.600 LIB libspdk_bdev_malloc.a 00:04:33.600 LIB libspdk_bdev_iscsi.a 00:04:33.600 LIB libspdk_bdev_ftl.a 00:04:33.858 LIB libspdk_bdev_zone_block.a 00:04:33.858 LIB libspdk_bdev_virtio.a 00:04:33.858 LIB libspdk_bdev_lvol.a 00:04:34.115 LIB libspdk_bdev_raid.a 00:04:35.106 LIB libspdk_bdev_nvme.a 00:04:35.673 CC module/event/subsystems/scheduler/scheduler.o 
00:04:35.673 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:35.673 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:35.673 CC module/event/subsystems/fsdev/fsdev.o 00:04:35.673 CC module/event/subsystems/sock/sock.o 00:04:35.673 CC module/event/subsystems/vmd/vmd.o 00:04:35.673 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:35.673 CC module/event/subsystems/iobuf/iobuf.o 00:04:35.673 CC module/event/subsystems/keyring/keyring.o 00:04:35.673 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:35.673 LIB libspdk_event_scheduler.a 00:04:35.673 LIB libspdk_event_vfu_tgt.a 00:04:35.673 LIB libspdk_event_vhost_blk.a 00:04:35.673 LIB libspdk_event_fsdev.a 00:04:35.673 LIB libspdk_event_sock.a 00:04:35.673 LIB libspdk_event_vmd.a 00:04:35.673 LIB libspdk_event_keyring.a 00:04:35.673 LIB libspdk_event_iobuf.a 00:04:35.930 CC module/event/subsystems/accel/accel.o 00:04:36.189 LIB libspdk_event_accel.a 00:04:36.449 CC module/event/subsystems/bdev/bdev.o 00:04:36.449 LIB libspdk_event_bdev.a 00:04:37.016 CC module/event/subsystems/scsi/scsi.o 00:04:37.016 CC module/event/subsystems/ublk/ublk.o 00:04:37.016 CC module/event/subsystems/nbd/nbd.o 00:04:37.016 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:37.016 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:37.016 LIB libspdk_event_ublk.a 00:04:37.016 LIB libspdk_event_scsi.a 00:04:37.016 LIB libspdk_event_nbd.a 00:04:37.016 LIB libspdk_event_nvmf.a 00:04:37.275 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:37.275 CC module/event/subsystems/iscsi/iscsi.o 00:04:37.275 LIB libspdk_event_vhost_scsi.a 00:04:37.275 LIB libspdk_event_iscsi.a 00:04:37.535 CC app/trace_record/trace_record.o 00:04:37.802 CC app/spdk_top/spdk_top.o 00:04:37.802 CXX app/trace/trace.o 00:04:37.802 CC app/spdk_nvme_identify/identify.o 00:04:37.802 CC app/spdk_nvme_discover/discovery_aer.o 00:04:37.802 CC app/spdk_nvme_perf/perf.o 00:04:37.802 CC test/rpc_client/rpc_client_test.o 00:04:37.802 CC app/spdk_lspci/spdk_lspci.o 00:04:37.802 TEST_HEADER include/spdk/barrier.h 00:04:37.802 TEST_HEADER include/spdk/assert.h 00:04:37.802 TEST_HEADER include/spdk/accel.h 00:04:37.802 TEST_HEADER include/spdk/accel_module.h 00:04:37.802 TEST_HEADER include/spdk/base64.h 00:04:37.802 TEST_HEADER include/spdk/bit_array.h 00:04:37.803 TEST_HEADER include/spdk/bdev.h 00:04:37.803 TEST_HEADER include/spdk/bdev_module.h 00:04:37.803 TEST_HEADER include/spdk/bdev_zone.h 00:04:37.803 TEST_HEADER include/spdk/bit_pool.h 00:04:37.803 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:37.803 TEST_HEADER include/spdk/blob.h 00:04:37.803 TEST_HEADER include/spdk/blobfs.h 00:04:37.803 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:37.803 TEST_HEADER include/spdk/blob_bdev.h 00:04:37.803 TEST_HEADER include/spdk/config.h 00:04:37.803 TEST_HEADER include/spdk/conf.h 00:04:37.803 TEST_HEADER include/spdk/cpuset.h 00:04:37.803 TEST_HEADER include/spdk/crc32.h 00:04:37.803 TEST_HEADER include/spdk/crc16.h 00:04:37.803 TEST_HEADER include/spdk/dif.h 00:04:37.803 TEST_HEADER include/spdk/endian.h 00:04:37.803 TEST_HEADER include/spdk/env_dpdk.h 00:04:37.803 TEST_HEADER include/spdk/crc64.h 00:04:37.803 TEST_HEADER include/spdk/dma.h 00:04:37.803 TEST_HEADER include/spdk/env.h 00:04:37.803 TEST_HEADER include/spdk/event.h 00:04:37.803 TEST_HEADER include/spdk/fd_group.h 00:04:37.803 TEST_HEADER include/spdk/fd.h 00:04:37.803 TEST_HEADER include/spdk/fsdev.h 00:04:37.803 TEST_HEADER include/spdk/file.h 00:04:37.803 TEST_HEADER include/spdk/fsdev_module.h 00:04:37.803 TEST_HEADER include/spdk/ftl.h 
00:04:37.803 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:37.803 TEST_HEADER include/spdk/gpt_spec.h 00:04:37.803 TEST_HEADER include/spdk/hexlify.h 00:04:37.803 TEST_HEADER include/spdk/histogram_data.h 00:04:37.803 TEST_HEADER include/spdk/idxd.h 00:04:37.803 TEST_HEADER include/spdk/idxd_spec.h 00:04:37.803 TEST_HEADER include/spdk/init.h 00:04:37.803 TEST_HEADER include/spdk/ioat_spec.h 00:04:37.803 TEST_HEADER include/spdk/ioat.h 00:04:37.803 TEST_HEADER include/spdk/iscsi_spec.h 00:04:37.803 TEST_HEADER include/spdk/json.h 00:04:37.803 TEST_HEADER include/spdk/keyring.h 00:04:37.803 TEST_HEADER include/spdk/keyring_module.h 00:04:37.803 TEST_HEADER include/spdk/jsonrpc.h 00:04:37.803 TEST_HEADER include/spdk/likely.h 00:04:37.803 TEST_HEADER include/spdk/log.h 00:04:37.803 TEST_HEADER include/spdk/md5.h 00:04:37.803 TEST_HEADER include/spdk/lvol.h 00:04:37.803 TEST_HEADER include/spdk/memory.h 00:04:37.803 TEST_HEADER include/spdk/nbd.h 00:04:37.803 TEST_HEADER include/spdk/mmio.h 00:04:37.803 TEST_HEADER include/spdk/net.h 00:04:37.803 TEST_HEADER include/spdk/notify.h 00:04:37.803 TEST_HEADER include/spdk/nvme.h 00:04:37.803 TEST_HEADER include/spdk/nvme_intel.h 00:04:37.803 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:37.803 TEST_HEADER include/spdk/nvme_spec.h 00:04:37.803 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:37.803 TEST_HEADER include/spdk/nvme_zns.h 00:04:37.803 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:37.803 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:37.803 TEST_HEADER include/spdk/nvmf.h 00:04:37.803 TEST_HEADER include/spdk/nvmf_spec.h 00:04:37.803 TEST_HEADER include/spdk/nvmf_transport.h 00:04:37.803 TEST_HEADER include/spdk/opal.h 00:04:37.803 TEST_HEADER include/spdk/opal_spec.h 00:04:37.803 TEST_HEADER include/spdk/pci_ids.h 00:04:37.803 TEST_HEADER include/spdk/pipe.h 00:04:37.803 TEST_HEADER include/spdk/queue.h 00:04:37.803 TEST_HEADER include/spdk/reduce.h 00:04:37.803 TEST_HEADER include/spdk/rpc.h 00:04:37.803 TEST_HEADER include/spdk/scheduler.h 00:04:37.803 TEST_HEADER include/spdk/scsi.h 00:04:37.803 TEST_HEADER include/spdk/scsi_spec.h 00:04:37.803 TEST_HEADER include/spdk/sock.h 00:04:37.803 CC app/nvmf_tgt/nvmf_main.o 00:04:37.803 CC app/iscsi_tgt/iscsi_tgt.o 00:04:37.803 TEST_HEADER include/spdk/stdinc.h 00:04:37.803 TEST_HEADER include/spdk/string.h 00:04:37.803 TEST_HEADER include/spdk/thread.h 00:04:37.803 TEST_HEADER include/spdk/trace.h 00:04:37.803 TEST_HEADER include/spdk/trace_parser.h 00:04:37.803 CC app/spdk_dd/spdk_dd.o 00:04:37.803 TEST_HEADER include/spdk/tree.h 00:04:37.803 TEST_HEADER include/spdk/util.h 00:04:37.803 TEST_HEADER include/spdk/ublk.h 00:04:37.803 TEST_HEADER include/spdk/uuid.h 00:04:37.803 TEST_HEADER include/spdk/version.h 00:04:37.803 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:37.803 TEST_HEADER include/spdk/vhost.h 00:04:37.803 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:37.803 TEST_HEADER include/spdk/vmd.h 00:04:37.803 TEST_HEADER include/spdk/xor.h 00:04:37.803 TEST_HEADER include/spdk/zipf.h 00:04:37.803 CXX test/cpp_headers/accel.o 00:04:37.803 CXX test/cpp_headers/accel_module.o 00:04:37.803 CXX test/cpp_headers/assert.o 00:04:37.803 CXX test/cpp_headers/barrier.o 00:04:37.803 CXX test/cpp_headers/base64.o 00:04:37.803 CXX test/cpp_headers/bdev.o 00:04:37.803 CXX test/cpp_headers/bdev_module.o 00:04:37.803 CC examples/ioat/verify/verify.o 00:04:37.803 CXX test/cpp_headers/bit_array.o 00:04:37.803 CXX test/cpp_headers/bdev_zone.o 00:04:37.803 CC app/spdk_tgt/spdk_tgt.o 00:04:37.803 CXX 
test/cpp_headers/bit_pool.o 00:04:37.803 CXX test/cpp_headers/blobfs_bdev.o 00:04:37.803 CXX test/cpp_headers/blob_bdev.o 00:04:37.803 CXX test/cpp_headers/blobfs.o 00:04:37.803 CXX test/cpp_headers/blob.o 00:04:37.803 CC examples/ioat/perf/perf.o 00:04:37.803 CXX test/cpp_headers/conf.o 00:04:37.803 CXX test/cpp_headers/config.o 00:04:37.803 CXX test/cpp_headers/cpuset.o 00:04:37.803 CXX test/cpp_headers/crc16.o 00:04:37.803 CXX test/cpp_headers/crc32.o 00:04:37.803 CXX test/cpp_headers/crc64.o 00:04:37.803 CXX test/cpp_headers/dif.o 00:04:37.803 CXX test/cpp_headers/dma.o 00:04:37.803 CXX test/cpp_headers/endian.o 00:04:37.803 CXX test/cpp_headers/env_dpdk.o 00:04:37.803 CXX test/cpp_headers/env.o 00:04:37.803 CXX test/cpp_headers/event.o 00:04:37.803 CXX test/cpp_headers/fd_group.o 00:04:37.803 CXX test/cpp_headers/fd.o 00:04:37.803 CXX test/cpp_headers/file.o 00:04:37.803 CXX test/cpp_headers/fsdev.o 00:04:37.803 CXX test/cpp_headers/fsdev_module.o 00:04:37.803 CXX test/cpp_headers/ftl.o 00:04:37.803 CXX test/cpp_headers/fuse_dispatcher.o 00:04:37.803 CXX test/cpp_headers/gpt_spec.o 00:04:37.803 CXX test/cpp_headers/hexlify.o 00:04:37.803 CXX test/cpp_headers/histogram_data.o 00:04:37.803 CXX test/cpp_headers/idxd.o 00:04:37.803 CXX test/cpp_headers/idxd_spec.o 00:04:37.803 CXX test/cpp_headers/init.o 00:04:37.803 CXX test/cpp_headers/ioat.o 00:04:37.803 CXX test/cpp_headers/ioat_spec.o 00:04:37.803 CC examples/util/zipf/zipf.o 00:04:37.803 CC app/fio/nvme/fio_plugin.o 00:04:37.803 CC test/thread/poller_perf/poller_perf.o 00:04:37.803 CC test/thread/lock/spdk_lock.o 00:04:37.803 CC test/env/vtophys/vtophys.o 00:04:37.803 CC test/app/histogram_perf/histogram_perf.o 00:04:37.803 CC test/env/pci/pci_ut.o 00:04:37.803 CC test/app/stub/stub.o 00:04:37.803 CXX test/cpp_headers/iscsi_spec.o 00:04:37.803 CC test/app/jsoncat/jsoncat.o 00:04:37.803 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:37.803 CC test/env/memory/memory_ut.o 00:04:37.803 CC test/dma/test_dma/test_dma.o 00:04:37.803 CC app/fio/bdev/fio_plugin.o 00:04:37.803 LINK spdk_lspci 00:04:37.803 LINK rpc_client_test 00:04:37.803 CC test/app/bdev_svc/bdev_svc.o 00:04:37.803 CC test/env/mem_callbacks/mem_callbacks.o 00:04:37.803 LINK spdk_nvme_discover 00:04:37.803 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:37.803 LINK spdk_trace_record 00:04:38.066 LINK interrupt_tgt 00:04:38.066 CXX test/cpp_headers/json.o 00:04:38.066 CXX test/cpp_headers/jsonrpc.o 00:04:38.066 CXX test/cpp_headers/keyring.o 00:04:38.066 CXX test/cpp_headers/keyring_module.o 00:04:38.066 CXX test/cpp_headers/likely.o 00:04:38.066 CXX test/cpp_headers/log.o 00:04:38.066 CXX test/cpp_headers/lvol.o 00:04:38.066 CXX test/cpp_headers/md5.o 00:04:38.066 CXX test/cpp_headers/memory.o 00:04:38.066 CXX test/cpp_headers/mmio.o 00:04:38.066 CXX test/cpp_headers/nbd.o 00:04:38.066 CXX test/cpp_headers/net.o 00:04:38.066 LINK histogram_perf 00:04:38.066 CXX test/cpp_headers/notify.o 00:04:38.066 CXX test/cpp_headers/nvme.o 00:04:38.066 LINK zipf 00:04:38.066 LINK poller_perf 00:04:38.066 CXX test/cpp_headers/nvme_intel.o 00:04:38.066 CXX test/cpp_headers/nvme_ocssd.o 00:04:38.066 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:38.066 LINK jsoncat 00:04:38.066 CXX test/cpp_headers/nvme_zns.o 00:04:38.066 CXX test/cpp_headers/nvme_spec.o 00:04:38.066 CXX test/cpp_headers/nvmf_cmd.o 00:04:38.066 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:38.066 CXX test/cpp_headers/nvmf.o 00:04:38.066 CXX test/cpp_headers/nvmf_spec.o 00:04:38.066 CXX test/cpp_headers/nvmf_transport.o 
00:04:38.066 LINK vtophys 00:04:38.066 CXX test/cpp_headers/opal.o 00:04:38.066 CXX test/cpp_headers/opal_spec.o 00:04:38.066 CXX test/cpp_headers/pci_ids.o 00:04:38.066 CXX test/cpp_headers/pipe.o 00:04:38.066 LINK nvmf_tgt 00:04:38.066 CXX test/cpp_headers/queue.o 00:04:38.066 CXX test/cpp_headers/reduce.o 00:04:38.066 CXX test/cpp_headers/rpc.o 00:04:38.066 CXX test/cpp_headers/scheduler.o 00:04:38.066 CXX test/cpp_headers/scsi.o 00:04:38.066 CXX test/cpp_headers/scsi_spec.o 00:04:38.066 LINK env_dpdk_post_init 00:04:38.066 CXX test/cpp_headers/sock.o 00:04:38.066 LINK verify 00:04:38.066 CXX test/cpp_headers/stdinc.o 00:04:38.066 CXX test/cpp_headers/string.o 00:04:38.066 LINK iscsi_tgt 00:04:38.066 CXX test/cpp_headers/thread.o 00:04:38.066 LINK ioat_perf 00:04:38.066 LINK stub 00:04:38.066 CXX test/cpp_headers/trace.o 00:04:38.066 LINK spdk_tgt 00:04:38.066 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:38.066 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:38.066 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:04:38.066 LINK bdev_svc 00:04:38.067 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:04:38.067 LINK spdk_trace 00:04:38.067 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:38.067 CXX test/cpp_headers/trace_parser.o 00:04:38.067 CXX test/cpp_headers/tree.o 00:04:38.067 CXX test/cpp_headers/ublk.o 00:04:38.067 CXX test/cpp_headers/util.o 00:04:38.067 CXX test/cpp_headers/uuid.o 00:04:38.067 CXX test/cpp_headers/version.o 00:04:38.067 CXX test/cpp_headers/vfio_user_pci.o 00:04:38.067 CXX test/cpp_headers/vfio_user_spec.o 00:04:38.067 CXX test/cpp_headers/vhost.o 00:04:38.325 CXX test/cpp_headers/vmd.o 00:04:38.325 CXX test/cpp_headers/xor.o 00:04:38.325 CXX test/cpp_headers/zipf.o 00:04:38.325 LINK pci_ut 00:04:38.325 LINK spdk_dd 00:04:38.325 LINK nvme_fuzz 00:04:38.325 LINK test_dma 00:04:38.325 LINK spdk_nvme 00:04:38.584 LINK llvm_vfio_fuzz 00:04:38.584 LINK spdk_nvme_identify 00:04:38.584 LINK spdk_bdev 00:04:38.584 LINK mem_callbacks 00:04:38.584 LINK spdk_top 00:04:38.584 LINK vhost_fuzz 00:04:38.584 LINK spdk_nvme_perf 00:04:38.584 CC examples/idxd/perf/perf.o 00:04:38.584 CC examples/vmd/led/led.o 00:04:38.584 CC examples/vmd/lsvmd/lsvmd.o 00:04:38.584 CC examples/sock/hello_world/hello_sock.o 00:04:38.842 CC examples/thread/thread/thread_ex.o 00:04:38.842 CC app/vhost/vhost.o 00:04:38.842 LINK llvm_nvme_fuzz 00:04:38.842 LINK led 00:04:38.842 LINK lsvmd 00:04:38.842 LINK hello_sock 00:04:38.842 LINK memory_ut 00:04:38.842 LINK idxd_perf 00:04:38.842 LINK thread 00:04:38.842 LINK vhost 00:04:39.101 LINK spdk_lock 00:04:39.359 LINK iscsi_fuzz 00:04:39.617 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:39.617 CC examples/nvme/hotplug/hotplug.o 00:04:39.617 CC examples/nvme/reconnect/reconnect.o 00:04:39.617 CC examples/nvme/hello_world/hello_world.o 00:04:39.617 CC examples/nvme/abort/abort.o 00:04:39.617 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:39.617 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:39.617 CC examples/nvme/arbitration/arbitration.o 00:04:39.617 CC test/event/reactor/reactor.o 00:04:39.617 CC test/event/reactor_perf/reactor_perf.o 00:04:39.617 CC test/event/app_repeat/app_repeat.o 00:04:39.617 CC test/event/event_perf/event_perf.o 00:04:39.617 LINK pmr_persistence 00:04:39.617 CC test/event/scheduler/scheduler.o 00:04:39.617 LINK hotplug 00:04:39.617 LINK hello_world 00:04:39.617 LINK cmb_copy 00:04:39.876 LINK reactor 00:04:39.876 LINK reactor_perf 00:04:39.876 LINK event_perf 00:04:39.876 LINK reconnect 00:04:39.876 LINK app_repeat 
00:04:39.876 LINK abort 00:04:39.876 LINK arbitration 00:04:39.876 LINK nvme_manage 00:04:39.876 LINK scheduler 00:04:40.132 CC test/nvme/aer/aer.o 00:04:40.132 CC test/nvme/startup/startup.o 00:04:40.132 CC test/nvme/simple_copy/simple_copy.o 00:04:40.132 CC test/nvme/fdp/fdp.o 00:04:40.132 CC test/nvme/reserve/reserve.o 00:04:40.132 CC test/nvme/reset/reset.o 00:04:40.132 CC test/nvme/boot_partition/boot_partition.o 00:04:40.132 CC test/nvme/compliance/nvme_compliance.o 00:04:40.132 CC test/nvme/connect_stress/connect_stress.o 00:04:40.132 CC test/nvme/fused_ordering/fused_ordering.o 00:04:40.132 CC test/nvme/overhead/overhead.o 00:04:40.132 CC test/nvme/sgl/sgl.o 00:04:40.132 CC test/nvme/err_injection/err_injection.o 00:04:40.132 CC test/nvme/e2edp/nvme_dp.o 00:04:40.132 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:40.132 CC test/nvme/cuse/cuse.o 00:04:40.132 CC test/blobfs/mkfs/mkfs.o 00:04:40.132 CC test/accel/dif/dif.o 00:04:40.132 CC test/lvol/esnap/esnap.o 00:04:40.132 LINK startup 00:04:40.132 LINK boot_partition 00:04:40.132 LINK reserve 00:04:40.132 LINK connect_stress 00:04:40.132 LINK doorbell_aers 00:04:40.132 LINK err_injection 00:04:40.132 LINK fused_ordering 00:04:40.132 LINK simple_copy 00:04:40.132 LINK reset 00:04:40.132 LINK fdp 00:04:40.389 LINK nvme_dp 00:04:40.389 LINK sgl 00:04:40.389 LINK overhead 00:04:40.389 LINK aer 00:04:40.389 LINK mkfs 00:04:40.389 LINK nvme_compliance 00:04:40.648 LINK dif 00:04:40.648 CC examples/accel/perf/accel_perf.o 00:04:40.648 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:40.648 CC examples/blob/hello_world/hello_blob.o 00:04:40.906 CC examples/blob/cli/blobcli.o 00:04:40.906 LINK hello_blob 00:04:40.906 LINK hello_fsdev 00:04:40.906 LINK cuse 00:04:41.165 LINK accel_perf 00:04:41.165 LINK blobcli 00:04:41.733 CC examples/bdev/bdevperf/bdevperf.o 00:04:41.733 CC examples/bdev/hello_world/hello_bdev.o 00:04:41.991 LINK hello_bdev 00:04:41.991 CC test/bdev/bdevio/bdevio.o 00:04:42.250 LINK bdevperf 00:04:42.509 LINK bdevio 00:04:43.887 LINK esnap 00:04:43.887 CC examples/nvmf/nvmf/nvmf.o 00:04:44.148 LINK nvmf 00:04:45.529 00:04:45.529 real 0m46.290s 00:04:45.529 user 6m55.005s 00:04:45.529 sys 2m21.725s 00:04:45.529 20:24:38 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:45.529 20:24:38 make -- common/autotest_common.sh@10 -- $ set +x 00:04:45.529 ************************************ 00:04:45.529 END TEST make 00:04:45.529 ************************************ 00:04:45.529 20:24:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:45.529 20:24:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:45.529 20:24:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:45.529 20:24:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.529 20:24:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:45.529 20:24:38 -- pm/common@44 -- $ pid=1726243 00:04:45.529 20:24:38 -- pm/common@50 -- $ kill -TERM 1726243 00:04:45.529 20:24:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.529 20:24:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:45.529 20:24:38 -- pm/common@44 -- $ pid=1726245 00:04:45.529 20:24:38 -- pm/common@50 -- $ kill -TERM 1726245 00:04:45.529 20:24:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.529 20:24:38 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:45.529 20:24:38 -- pm/common@44 -- $ pid=1726247 00:04:45.529 20:24:38 -- pm/common@50 -- $ kill -TERM 1726247 00:04:45.529 20:24:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.529 20:24:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:45.529 20:24:38 -- pm/common@44 -- $ pid=1726277 00:04:45.529 20:24:38 -- pm/common@50 -- $ sudo -E kill -TERM 1726277 00:04:45.529 20:24:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:45.529 20:24:38 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:04:45.529 20:24:38 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.529 20:24:38 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:45.529 20:24:38 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.529 20:24:38 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.529 20:24:38 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.529 20:24:38 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.529 20:24:38 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.529 20:24:38 -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.529 20:24:38 -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.529 20:24:38 -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.529 20:24:38 -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.529 20:24:38 -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.529 20:24:38 -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.529 20:24:38 -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.529 20:24:38 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.529 20:24:38 -- scripts/common.sh@344 -- # case "$op" in 00:04:45.529 20:24:38 -- scripts/common.sh@345 -- # : 1 00:04:45.529 20:24:38 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.529 20:24:38 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.529 20:24:38 -- scripts/common.sh@365 -- # decimal 1 00:04:45.529 20:24:38 -- scripts/common.sh@353 -- # local d=1 00:04:45.529 20:24:38 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.529 20:24:38 -- scripts/common.sh@355 -- # echo 1 00:04:45.529 20:24:38 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.530 20:24:38 -- scripts/common.sh@366 -- # decimal 2 00:04:45.530 20:24:38 -- scripts/common.sh@353 -- # local d=2 00:04:45.530 20:24:38 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.530 20:24:38 -- scripts/common.sh@355 -- # echo 2 00:04:45.530 20:24:38 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.530 20:24:38 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.530 20:24:38 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.530 20:24:38 -- scripts/common.sh@368 -- # return 0 00:04:45.530 20:24:38 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.530 20:24:38 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.530 --rc genhtml_branch_coverage=1 00:04:45.530 --rc genhtml_function_coverage=1 00:04:45.530 --rc genhtml_legend=1 00:04:45.530 --rc geninfo_all_blocks=1 00:04:45.530 --rc geninfo_unexecuted_blocks=1 00:04:45.530 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.530 ' 00:04:45.530 20:24:38 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.530 --rc genhtml_branch_coverage=1 00:04:45.530 --rc genhtml_function_coverage=1 00:04:45.530 --rc genhtml_legend=1 00:04:45.530 --rc geninfo_all_blocks=1 00:04:45.530 --rc geninfo_unexecuted_blocks=1 00:04:45.530 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.530 ' 00:04:45.530 20:24:38 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:45.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.530 --rc genhtml_branch_coverage=1 00:04:45.530 --rc genhtml_function_coverage=1 00:04:45.530 --rc genhtml_legend=1 00:04:45.530 --rc geninfo_all_blocks=1 00:04:45.530 --rc geninfo_unexecuted_blocks=1 00:04:45.530 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.530 ' 00:04:45.530 20:24:38 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.530 --rc genhtml_branch_coverage=1 00:04:45.530 --rc genhtml_function_coverage=1 00:04:45.530 --rc genhtml_legend=1 00:04:45.530 --rc geninfo_all_blocks=1 00:04:45.530 --rc geninfo_unexecuted_blocks=1 00:04:45.530 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:45.530 ' 00:04:45.530 20:24:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:45.530 20:24:38 -- nvmf/common.sh@7 -- # uname -s 00:04:45.530 20:24:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.530 20:24:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.530 20:24:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.530 20:24:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.530 20:24:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.530 20:24:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.530 20:24:38 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.530 20:24:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.530 20:24:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.530 20:24:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.530 20:24:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:45.530 20:24:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:45.530 20:24:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.530 20:24:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.530 20:24:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.530 20:24:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.530 20:24:38 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:45.530 20:24:38 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.530 20:24:38 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.530 20:24:38 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.530 20:24:38 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.530 20:24:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.530 20:24:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.530 20:24:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.530 20:24:38 -- paths/export.sh@5 -- # export PATH 00:04:45.530 20:24:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.530 20:24:38 -- nvmf/common.sh@51 -- # : 0 00:04:45.530 20:24:38 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.530 20:24:38 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:45.530 20:24:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.530 20:24:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.530 20:24:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.530 20:24:38 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.530 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.530 20:24:38 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.530 20:24:38 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.530 20:24:38 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.530 20:24:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:45.530 20:24:38 -- spdk/autotest.sh@32 -- # uname -s 00:04:45.530 
20:24:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:45.530 20:24:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:45.530 20:24:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:04:45.530 20:24:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:45.530 20:24:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:04:45.530 20:24:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:45.530 20:24:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:45.530 20:24:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:45.530 20:24:38 -- spdk/autotest.sh@48 -- # udevadm_pid=1786303 00:04:45.530 20:24:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:45.530 20:24:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:45.530 20:24:38 -- pm/common@17 -- # local monitor 00:04:45.530 20:24:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.530 20:24:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.530 20:24:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.530 20:24:38 -- pm/common@21 -- # date +%s 00:04:45.530 20:24:38 -- pm/common@21 -- # date +%s 00:04:45.530 20:24:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.530 20:24:38 -- pm/common@25 -- # sleep 1 00:04:45.530 20:24:38 -- pm/common@21 -- # date +%s 00:04:45.530 20:24:38 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733426678 00:04:45.530 20:24:38 -- pm/common@21 -- # date +%s 00:04:45.530 20:24:38 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733426678 00:04:45.530 20:24:38 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733426678 00:04:45.530 20:24:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733426678 00:04:45.791 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733426678_collect-cpu-load.pm.log 00:04:45.791 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733426678_collect-vmstat.pm.log 00:04:45.791 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733426678_collect-cpu-temp.pm.log 00:04:45.791 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733426678_collect-bmc-pm.bmc.pm.log 00:04:46.730 20:24:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:46.730 20:24:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:46.730 20:24:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.730 20:24:39 -- common/autotest_common.sh@10 -- # set +x 
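The version-comparison xtrace above (scripts/common.sh@333-368, replayed again before each test below) reduces to a small amount of bash. A minimal sketch reconstructed from the visible trace — function and variable names follow the xtrace, but the shipped scripts/common.sh may differ in detail:

    #!/usr/bin/env bash
    # Sketch of the cmp_versions flow seen in the xtrace: split on .-:,
    # treat missing or non-numeric fields as 0, compare field by field.
    decimal() {
        local d=$1
        if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi
    }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            if (( ver1[v] > ver2[v] )); then [[ $op == '>' ]]; return; fi
            if (( ver1[v] < ver2[v] )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' ]]   # every field matched
    }

    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo older   # matches the trace: lcov 1.15 < 2, return 0

On this run the check evaluates lt 1.15 2 for LCOV 1.15, which is why the branch/function coverage --rc options get wired into LCOV_OPTS before the baseline capture below.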
00:04:46.730 20:24:39 -- spdk/autotest.sh@59 -- # create_test_list 00:04:46.730 20:24:39 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:46.730 20:24:39 -- common/autotest_common.sh@10 -- # set +x 00:04:46.730 20:24:40 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:04:46.730 20:24:40 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:46.730 20:24:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:46.730 20:24:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:04:46.730 20:24:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:46.730 20:24:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:46.730 20:24:40 -- common/autotest_common.sh@1457 -- # uname 00:04:46.730 20:24:40 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:46.730 20:24:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:46.730 20:24:40 -- common/autotest_common.sh@1477 -- # uname 00:04:46.730 20:24:40 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:46.730 20:24:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:46.730 20:24:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:04:46.730 lcov: LCOV version 1.15 00:04:46.730 20:24:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:04:54.852 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:00.130 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:05:03.422 20:24:56 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:03.422 20:24:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.422 20:24:56 -- common/autotest_common.sh@10 -- # set +x 00:05:03.422 20:24:56 -- spdk/autotest.sh@78 -- # rm -f 00:05:03.422 20:24:56 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.714 0000:5e:00.0 (144d a80a): Already using the nvme driver 00:05:06.714 0000:af:00.0 (8086 2701): Already using the nvme driver 00:05:06.714 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:00:04.0 
(8086 2021): Already using the ioatdma driver 00:05:06.714 0000:b0:00.0 (8086 2701): Already using the nvme driver 00:05:06.714 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:06.714 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:06.714 20:25:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:06.714 20:25:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:06.714 20:25:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:06.714 20:25:00 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:06.714 20:25:00 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:06.714 20:25:00 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:06.714 20:25:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:06.714 20:25:00 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:05:06.714 20:25:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.714 20:25:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:06.714 20:25:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:06.714 20:25:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.714 20:25:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.714 20:25:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:06.714 20:25:00 -- common/autotest_common.sh@1669 -- # bdf=0000:af:00.0 00:05:06.714 20:25:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.714 20:25:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:06.714 20:25:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:06.714 20:25:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:06.714 20:25:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.714 20:25:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:06.714 20:25:00 -- common/autotest_common.sh@1669 -- # bdf=0000:b0:00.0 00:05:06.714 20:25:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:06.714 20:25:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:06.714 20:25:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:06.714 20:25:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:06.714 20:25:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:06.714 20:25:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:06.714 20:25:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.714 20:25:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.714 20:25:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:06.714 20:25:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:06.714 20:25:00 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:06.973 No valid GPT data, bailing 00:05:06.973 20:25:00 -- scripts/common.sh@394 
-- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:06.973 20:25:00 -- scripts/common.sh@394 -- # pt= 00:05:06.973 20:25:00 -- scripts/common.sh@395 -- # return 1 00:05:06.973 20:25:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:06.973 1+0 records in 00:05:06.973 1+0 records out 00:05:06.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462086 s, 227 MB/s 00:05:06.973 20:25:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.973 20:25:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.973 20:25:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:06.973 20:25:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:06.973 20:25:00 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:06.973 No valid GPT data, bailing 00:05:06.973 20:25:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:06.973 20:25:00 -- scripts/common.sh@394 -- # pt= 00:05:06.973 20:25:00 -- scripts/common.sh@395 -- # return 1 00:05:06.973 20:25:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:06.973 1+0 records in 00:05:06.973 1+0 records out 00:05:06.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450107 s, 233 MB/s 00:05:06.973 20:25:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:06.973 20:25:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:06.973 20:25:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:06.973 20:25:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:06.973 20:25:00 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:06.974 No valid GPT data, bailing 00:05:06.974 20:25:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:06.974 20:25:00 -- scripts/common.sh@394 -- # pt= 00:05:06.974 20:25:00 -- scripts/common.sh@395 -- # return 1 00:05:06.974 20:25:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:06.974 1+0 records in 00:05:06.974 1+0 records out 00:05:06.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470779 s, 223 MB/s 00:05:06.974 20:25:00 -- spdk/autotest.sh@105 -- # sync 00:05:06.974 20:25:00 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:06.974 20:25:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:06.974 20:25:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:12.247 20:25:05 -- spdk/autotest.sh@111 -- # uname -s 00:05:12.247 20:25:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:12.247 20:25:05 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:05:12.247 20:25:05 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:05:12.247 20:25:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.247 20:25:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.247 20:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:12.247 ************************************ 00:05:12.247 START TEST setup.sh 00:05:12.247 ************************************ 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:05:12.247 * Looking for test storage... 
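The block_in_use/dd sequence traced above amounts to: treat a namespace as free when SPDK's spdk-gpt.py finds no GPT and blkid reports no partition-table type, then zero its first MiB so stale metadata cannot leak into the tests. A hedged condensation of that flow — wipe_if_unused is a hypothetical helper, and spdk-gpt.py's exit-code behavior is assumed from its "No valid GPT data, bailing" message:

    # Hypothetical condensation of the trace above; not the shipped autotest.sh.
    wipe_if_unused() {
        local dev=$1 pt
        pt=$(blkid -s PTTYPE -o value "$dev")       # empty => no partition table
        if [[ -z $pt ]]; then
            # same wipe the log shows: 1 MiB of zeroes over the device head
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    }

    for dev in /dev/nvme*n1; do
        wipe_if_unused "$dev"
    done

The three "1+0 records in/out" pairs above are exactly this dd running once per namespace.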
00:05:12.247 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@345 -- # : 1 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@353 -- # local d=1 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@355 -- # echo 1 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@353 -- # local d=2 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@355 -- # echo 2 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.247 20:25:05 setup.sh -- scripts/common.sh@368 -- # return 0 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.247 --rc genhtml_branch_coverage=1 00:05:12.247 --rc genhtml_function_coverage=1 00:05:12.247 --rc genhtml_legend=1 00:05:12.247 --rc geninfo_all_blocks=1 00:05:12.247 --rc geninfo_unexecuted_blocks=1 00:05:12.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.247 ' 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.247 --rc genhtml_branch_coverage=1 00:05:12.247 --rc genhtml_function_coverage=1 00:05:12.247 --rc genhtml_legend=1 00:05:12.247 --rc geninfo_all_blocks=1 00:05:12.247 --rc geninfo_unexecuted_blocks=1 
00:05:12.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.247 ' 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.247 --rc genhtml_branch_coverage=1 00:05:12.247 --rc genhtml_function_coverage=1 00:05:12.247 --rc genhtml_legend=1 00:05:12.247 --rc geninfo_all_blocks=1 00:05:12.247 --rc geninfo_unexecuted_blocks=1 00:05:12.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.247 ' 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.247 --rc genhtml_branch_coverage=1 00:05:12.247 --rc genhtml_function_coverage=1 00:05:12.247 --rc genhtml_legend=1 00:05:12.247 --rc geninfo_all_blocks=1 00:05:12.247 --rc geninfo_unexecuted_blocks=1 00:05:12.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.247 ' 00:05:12.247 20:25:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:12.247 20:25:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:12.247 20:25:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.247 20:25:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:12.247 ************************************ 00:05:12.247 START TEST acl 00:05:12.247 ************************************ 00:05:12.247 20:25:05 setup.sh.acl -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:05:12.247 * Looking for test storage... 
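run_test, whose banners and timings frame every test from here on, is essentially a timed wrapper. A hypothetical reconstruction from what the log shows — the argument-count guard, the asterisk banners, and bash's time output; the real common/autotest_common.sh adds xtrace bookkeeping on top:

    # Hypothetical run_test sketch: run_test <name> <script> [args...]
    run_test() {
        [ $# -le 1 ] && return 1            # mirrors the '[' 2 -le 1 ']' guard
        local name=$1 rc
        shift
        printf '************ START TEST %s ************\n' "$name"
        time "$@"                           # emits the real/user/sys lines seen below
        rc=$?
        printf '************ END TEST %s ************\n' "$name"
        return $rc
    }

The "real 0m8.330s / user / sys" triplets that follow the denied and allowed tests below are this time keyword reporting on each test script.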
00:05:12.247 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:12.247 20:25:05 setup.sh.acl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.247 20:25:05 setup.sh.acl -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.247 20:25:05 setup.sh.acl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.247 20:25:05 setup.sh.acl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:05:12.247 20:25:05 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.248 20:25:05 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:12.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.248 --rc genhtml_branch_coverage=1 00:05:12.248 --rc genhtml_function_coverage=1 00:05:12.248 --rc genhtml_legend=1 00:05:12.248 --rc geninfo_all_blocks=1 00:05:12.248 --rc geninfo_unexecuted_blocks=1 00:05:12.248 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.248 ' 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:12.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.248 --rc genhtml_branch_coverage=1 00:05:12.248 --rc 
genhtml_function_coverage=1 00:05:12.248 --rc genhtml_legend=1 00:05:12.248 --rc geninfo_all_blocks=1 00:05:12.248 --rc geninfo_unexecuted_blocks=1 00:05:12.248 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.248 ' 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:12.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.248 --rc genhtml_branch_coverage=1 00:05:12.248 --rc genhtml_function_coverage=1 00:05:12.248 --rc genhtml_legend=1 00:05:12.248 --rc geninfo_all_blocks=1 00:05:12.248 --rc geninfo_unexecuted_blocks=1 00:05:12.248 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.248 ' 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:12.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.248 --rc genhtml_branch_coverage=1 00:05:12.248 --rc genhtml_function_coverage=1 00:05:12.248 --rc genhtml_legend=1 00:05:12.248 --rc geninfo_all_blocks=1 00:05:12.248 --rc geninfo_unexecuted_blocks=1 00:05:12.248 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.248 ' 00:05:12.248 20:25:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # bdf=0000:af:00.0 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # bdf=0000:b0:00.0 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1671 -- # 
is_block_zoned nvme2n1 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:12.248 20:25:05 setup.sh.acl -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:12.248 20:25:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:12.248 20:25:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:12.248 20:25:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:12.248 20:25:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:12.248 20:25:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:12.248 20:25:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.248 20:25:05 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:16.441 20:25:09 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:16.441 20:25:09 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:16.441 20:25:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:16.441 20:25:09 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:16.441 20:25:09 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.441 20:25:09 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:19.732 Hugepages 00:05:19.732 node hugesize free / total 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 00:05:19.732 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 
20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:19.732 20:25:12 setup.sh.acl -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.732 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:af:00.0 == *:*:*.* ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:b0:00.0 == *:*:*.* ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\b\0\:\0\0\.\0* ]] 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@24 -- # (( 3 > 0 )) 00:05:19.733 20:25:12 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:19.733 20:25:12 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.733 20:25:12 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.733 20:25:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:19.733 ************************************ 00:05:19.733 START TEST denied 00:05:19.733 ************************************ 00:05:19.733 20:25:12 setup.sh.acl.denied -- common/autotest_common.sh@1129 -- # denied 00:05:19.733 20:25:12 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:05:19.733 20:25:12 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:19.733 20:25:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.733 20:25:12 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:19.733 20:25:12 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:05:23.927 0000:5e:00.0 (144d a80a): Skipping denied controller at 0000:5e:00.0 00:05:23.927 20:25:16 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:05:23.927 20:25:16 
setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:23.927 20:25:16 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:23.927 20:25:16 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:05:23.927 20:25:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:05:23.927 20:25:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:23.927 20:25:16 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:23.927 20:25:16 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:23.927 20:25:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.927 20:25:16 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:28.252 00:05:28.252 real 0m8.330s 00:05:28.252 user 0m2.475s 00:05:28.252 sys 0m5.042s 00:05:28.252 20:25:21 setup.sh.acl.denied -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.252 20:25:21 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:28.252 ************************************ 00:05:28.252 END TEST denied 00:05:28.252 ************************************ 00:05:28.252 20:25:21 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:28.252 20:25:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.252 20:25:21 setup.sh.acl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.252 20:25:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:28.252 ************************************ 00:05:28.252 START TEST allowed 00:05:28.252 ************************************ 00:05:28.252 20:25:21 setup.sh.acl.allowed -- common/autotest_common.sh@1129 -- # allowed 00:05:28.252 20:25:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:05:28.252 20:25:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:28.252 20:25:21 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:05:28.252 20:25:21 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.252 20:25:21 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:33.529 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:05:33.529 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:af:00.0 0000:b0:00.0 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:af:00.0 ]] 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/driver 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:b0:00.0 ]] 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:b0:00.0/driver 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:33.530 20:25:26 
setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.530 20:25:26 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:37.728 00:05:37.728 real 0m9.202s 00:05:37.728 user 0m2.556s 00:05:37.728 sys 0m4.947s 00:05:37.728 20:25:30 setup.sh.acl.allowed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.728 20:25:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:37.728 ************************************ 00:05:37.728 END TEST allowed 00:05:37.728 ************************************ 00:05:37.728 00:05:37.728 real 0m25.159s 00:05:37.728 user 0m7.651s 00:05:37.728 sys 0m15.083s 00:05:37.728 20:25:30 setup.sh.acl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.728 20:25:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:37.728 ************************************ 00:05:37.728 END TEST acl 00:05:37.728 ************************************ 00:05:37.728 20:25:30 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:05:37.728 20:25:30 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.728 20:25:30 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.728 20:25:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:37.728 ************************************ 00:05:37.728 START TEST hugepages 00:05:37.728 ************************************ 00:05:37.728 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:05:37.728 * Looking for test storage... 
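Both the pre-cleanup pass and the acl test above walked every NVMe namespace through is_block_zoned, and the hugepages test starting here repeats the same dance. The probe itself is one sysfs read; a sketch matching the trace (the helper name comes from the xtrace, the file-read form is assumed):

    is_block_zoned() {
        local device=$1
        # the attribute is absent on kernels without zoned-block support
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    for ns in /sys/class/nvme/nvme*/nvme*n*; do
        is_block_zoned "${ns##*/}" && echo "zoned namespace: $ns"
    done

All three namespaces on this box read "none", which is why the earlier (( 0 > 0 )) check short-circuits the zoned-device handling.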
00:05:37.728 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:37.728 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.728 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.728 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.728 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.728 20:25:30 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:05:37.728 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.728 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.728 --rc genhtml_branch_coverage=1 00:05:37.728 --rc genhtml_function_coverage=1 00:05:37.728 --rc genhtml_legend=1 00:05:37.728 --rc geninfo_all_blocks=1 00:05:37.728 --rc geninfo_unexecuted_blocks=1 00:05:37.728 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.728 ' 00:05:37.728 20:25:30 
setup.sh.hugepages -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.728 --rc genhtml_branch_coverage=1 00:05:37.728 --rc genhtml_function_coverage=1 00:05:37.728 --rc genhtml_legend=1 00:05:37.728 --rc geninfo_all_blocks=1 00:05:37.728 --rc geninfo_unexecuted_blocks=1 00:05:37.728 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.728 ' 00:05:37.728 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.728 --rc genhtml_branch_coverage=1 00:05:37.728 --rc genhtml_function_coverage=1 00:05:37.728 --rc genhtml_legend=1 00:05:37.728 --rc geninfo_all_blocks=1 00:05:37.728 --rc geninfo_unexecuted_blocks=1 00:05:37.728 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.728 ' 00:05:37.728 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.728 --rc genhtml_branch_coverage=1 00:05:37.728 --rc genhtml_function_coverage=1 00:05:37.728 --rc genhtml_legend=1 00:05:37.728 --rc geninfo_all_blocks=1 00:05:37.728 --rc geninfo_unexecuted_blocks=1 00:05:37.728 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:37.728 ' 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:37.728 20:25:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 35298508 kB' 'MemAvailable: 40405664 kB' 'Buffers: 4304 kB' 'Cached: 17182828 kB' 'SwapCached: 0 kB' 'Active: 13822324 kB' 'Inactive: 4050784 kB' 'Active(anon): 12646400 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689380 kB' 'Mapped: 195524 kB' 
'Shmem: 11960424 kB' 'KReclaimable: 528120 kB' 'Slab: 1255344 kB' 'SReclaimable: 528120 kB' 'SUnreclaim: 727224 kB' 'KernelStack: 16448 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36435200 kB' 'Committed_AS: 13919864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205396 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.729 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.729 20:25:30 
[xtrace trimmed: setup/common.sh@31-32 repeats the same read / compare / continue triplet for every remaining /proc/meminfo key, Active(anon) through Unaccepted; none match Hugepagesize]
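For reference, the scan being expanded here boils down to the sketch below (a re-creation for readability only: it assumes plain /proc/meminfo input, and get_meminfo_sketch is a hypothetical name, not the verbatim setup/common.sh helper, which also handles per-node meminfo files):

  # Look up a single key in /proc/meminfo the way the xtrace expands it.
  # IFS=': ' splits "Hugepagesize: 2048 kB" into var=Hugepagesize, val=2048.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # every non-matching key is skipped
          echo "$val"                        # e.g. 2048 for Hugepagesize
          return 0
      done < /proc/meminfo
      return 1
  }

Running get_meminfo_sketch Hugepagesize on this host would print 2048, matching the echo 2048 in the trace just below.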
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 
0 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:05:37.731 20:25:30 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:05:37.731 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.731 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.731 20:25:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:37.731 ************************************ 00:05:37.731 START TEST single_node_setup 00:05:37.731 ************************************ 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1129 -- # single_node_setup 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # 
nodes_test[_no_nodes]=1024 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.731 20:25:30 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:41.021 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:41.021 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:41.022 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:41.280 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:05:41.281 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:05:41.281 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:05:41.281 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:41.281 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.545 20:25:34 
setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37489288 kB' 'MemAvailable: 42595996 kB' 'Buffers: 4304 kB' 'Cached: 17182940 kB' 'SwapCached: 0 kB' 'Active: 13825548 kB' 'Inactive: 4050784 kB' 'Active(anon): 12649624 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 692348 kB' 'Mapped: 195932 kB' 'Shmem: 11960536 kB' 'KReclaimable: 527672 kB' 'Slab: 1252768 kB' 'SReclaimable: 527672 kB' 'SUnreclaim: 725096 kB' 'KernelStack: 16816 kB' 'PageTables: 9560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13929420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205524 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.545 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 
[xtrace trimmed: the same per-key scan repeats against AnonHugePages for Cached through VmallocChunk; no match]
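This AnonHugePages lookup only matters because transparent hugepages are not disabled on this host: hugepages.sh@95 above compared the THP mode string "always [madvise] never" against *\[n\e\v\e\r\]* before asking for the counter. A minimal sketch of that gate, reusing the hypothetical get_meminfo_sketch from above:

  # Count anonymous hugepages only when THP is not set to [never],
  # mirroring the hugepages.sh@95-96 gate traced earlier.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in the dump above
  fi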
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37488644 kB' 'MemAvailable: 42595352 kB' 'Buffers: 4304 kB' 'Cached: 17182940 kB' 'SwapCached: 0 kB' 'Active: 13825024 kB' 'Inactive: 4050784 kB' 'Active(anon): 12649100 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 691832 kB' 'Mapped: 195888 kB' 'Shmem: 11960536 kB' 'KReclaimable: 527672 kB' 'Slab: 1252788 kB' 'SReclaimable: 527672 kB' 'SUnreclaim: 725116 kB' 'KernelStack: 16752 kB' 
'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13929440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205540 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.547 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.548 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.548 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.548 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.548 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.548 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.548 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.548 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.548 
[xtrace trimmed: the scan repeats once more against HugePages_Surp for Inactive through HardwareCorrupted; no match yet]
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:05:41.549 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37488180 kB' 'MemAvailable: 42594888 kB' 'Buffers: 4304 kB' 'Cached: 17182956 kB' 'SwapCached: 0 kB' 'Active: 13825484 kB' 'Inactive: 4050784 kB' 'Active(anon): 12649560 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 692316 kB' 'Mapped: 
195888 kB' 'Shmem: 11960552 kB' 'KReclaimable: 527672 kB' 'Slab: 1252788 kB' 'SReclaimable: 527672 kB' 'SUnreclaim: 725116 kB' 'KernelStack: 16720 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13933440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.550 20:25:34 
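The printf above is the full /proc/meminfo snapshot that get_meminfo re-parses for every key it is asked for. A minimal standalone sketch of that capture step, assuming only a Linux /proc/meminfo (names are illustrative, not the verbatim setup/common.sh source):

    shopt -s extglob                      # needed for the +([0-9]) pattern below
    mem_f=/proc/meminfo                   # per-node variant: /sys/devices/system/node/node0/meminfo
    mapfile -t mem < "$mem_f"             # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix that only per-node files carry
    printf '%s\n' "${mem[@]}"             # replay the snapshot, as in the trace above

The extglob strip is what lets the same loop serve both the global file and the per-node files.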
00:05:41.550 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-32 -- # [xtrace condensed: the loop walks MemTotal through HugePages_Free; none matches HugePages_Rsvd]
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:05:41.552 nr_hugepages=1024
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:05:41.552 resv_hugepages=0
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:05:41.552 surplus_hugepages=0
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:05:41.552 anon_hugepages=0
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
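With surp and resv both 0 and nr_hugepages at 1024, the accounting above balances. For reference, a hedged reconstruction of the per-key scan that produced those values -- it is exactly the long runs of continue in this trace (variable names are assumptions, not the exact SPDK source):

    get=HugePages_Rsvd                              # key looked up in this pass
    mapfile -t mem < /proc/meminfo                  # snapshot, as in the sketch above
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"      # split "Key:   value kB" into var / val
        [[ $var == "$get" ]] || continue            # the hundreds of "continue" lines in the trace
        echo "$val"                                 # prints 0 on this runner
        break
    done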
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37486128 kB' 'MemAvailable: 42592836 kB' 'Buffers: 4304 kB' 'Cached: 17182984 kB' 'SwapCached: 0 kB' 'Active: 13824804 kB' 'Inactive: 4050784 kB' 'Active(anon): 12648880 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 691604 kB' 'Mapped: 195888 kB' 'Shmem: 11960580 kB' 'KReclaimable: 527672 kB' 'Slab: 1252788 kB' 'SReclaimable: 527672 kB' 'SUnreclaim: 725116 kB' 'KernelStack: 16688 kB' 'PageTables: 9284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13929484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205636 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:41.552 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-32 -- # [xtrace condensed: the loop walks MemTotal through Unaccepted; none matches HugePages_Total]
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
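get_nodes then repeats the lookup per NUMA node. A minimal sketch of that enumeration, assuming the per-node meminfo files this trace tests for (the awk lookup stands in for get_meminfo and is not the SPDK implementation):

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node lines carry a "Node N " prefix, e.g. "Node 0 HugePages_Total:  1024"
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    done
    echo "no_nodes=${#nodes_sys[@]}"      # 2 on this runner: node0=1024, node1=0

On this runner it would report two nodes with all 1024 pages resident on node 0, matching the nodes_sys assignments above.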
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:05:41.554 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:05:41.555 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32631052 kB' 'MemFree: 20552420 kB' 'MemUsed: 12078632 kB' 'SwapCached: 0 kB' 'Active: 7985372 kB' 'Inactive: 217696 kB' 'Active(anon): 7297092 kB' 'Inactive(anon): 0 kB' 'Active(file): 688280 kB' 'Inactive(file): 217696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7856936 kB' 'Mapped: 113344 kB' 'AnonPages: 349328 kB' 'Shmem: 6950960 kB' 'KernelStack: 8600 kB' 'PageTables: 5084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279956 kB' 'Slab: 707396 kB' 'SReclaimable: 279956 kB' 'SUnreclaim: 427440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace trimmed: the read loop skips every node0 meminfo key from MemTotal through HugePages_Free before matching HugePages_Surp ...]
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:05:41.817 node0=1024 expecting 1024
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:05:41.817 
00:05:41.817 real 0m4.002s
00:05:41.817 user 0m1.568s
00:05:41.817 sys 0m2.493s
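Plugging this run's numbers into the checks just traced gives a quick consistency sketch (resv never appears expanded in the trace; it has to be 0 for the @109 equality to hold with surp=0, which is an inference, not a logged value):

    nr_hugepages=1024; surp=0; resv=0
    (( 1024 == nr_hugepages + surp + resv ))      # hugepages.sh@109: pool adds up
    nodes_test[0]=1024                            # single-node target set earlier
    (( nodes_test[0] += resv ))                   # hugepages.sh@115
    (( nodes_test[0] += 0 ))                      # hugepages.sh@116: node0 HugePages_Surp
    echo "node0=${nodes_test[0]} expecting 1024"  # hugepages.sh@127 prints exactly this

With the per-node tally matching the expectation, single_node_setup passes and the suite moves on to the next allocation pattern.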
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.817 20:25:34 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:05:41.817 ************************************
00:05:41.817 END TEST single_node_setup
00:05:41.817 ************************************
00:05:41.817 20:25:35 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:05:41.817 20:25:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:41.817 20:25:35 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.817 20:25:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:41.817 ************************************
00:05:41.817 START TEST even_2G_alloc
00:05:41.817 ************************************
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1129 -- # even_2G_alloc
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
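The get_test_nr_hugepages trace above reduces to simple arithmetic. A sketch with this run's values (default_hugepages is the 2048 kB Hugepagesize reported in the meminfo dumps below; the variable names mirror the trace):

    size=2097152              # argument to get_test_nr_hugepages: 2 GiB in kB
    default_hugepages=2048    # Hugepagesize in kB
    nr_hugepages=$(( size / default_hugepages ))   # 1024 pages, matching @56
    _no_nodes=2               # NUMA nodes counted by get_nodes
    per_node=$(( nr_hugepages / _no_nodes ))       # 512, the value @81 stores
    echo "nr_hugepages=$nr_hugepages per_node=$per_node"

The @80-@83 loop walks the nodes from the back (the @83 no-ops log indices 1 then 0), assigning 512 pages to each node, which is the even split the test name promises.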
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.817 20:25:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:05:45.104 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:05:45.104 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:05:45.104 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:05:45.104 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:45.104 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:45.104 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:05:45.104 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:05:45.104 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:05:45.104 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:05:45.104 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:05:45.104 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:05:45.104 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:05:45.104 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
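The @95 test at the end of this block is a pattern match against the kernel's transparent-hugepage mode string. A minimal standalone sketch (the sysfs path is the standard kernel location for this file; it is not itself visible in the trace):

    # The file reads e.g. "always [madvise] never"; the bracketed word is the
    # active mode. AnonHugePages is only worth sampling when it is not "never".
    thp_mode=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_mode != *"[never]"* ]]; then
      echo "THP enabled ($thp_mode); sampling AnonHugePages"
    fi

Here the active mode is "[madvise]", so the check passes and verify_nr_hugepages goes on to read AnonHugePages from the global /proc/meminfo.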
setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37499208 kB' 'MemAvailable: 42605916 kB' 'Buffers: 4304 kB' 'Cached: 17183080 kB' 'SwapCached: 0 kB' 'Active: 13823204 kB' 'Inactive: 4050784 kB' 'Active(anon): 12647280 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689852 kB' 'Mapped: 194692 kB' 'Shmem: 11960676 kB' 'KReclaimable: 527672 kB' 'Slab: 1252524 kB' 'SReclaimable: 527672 kB' 'SUnreclaim: 724852 kB' 'KernelStack: 16544 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13914892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205540 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.105 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.371 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37498956 kB' 'MemAvailable: 42605664 kB' 'Buffers: 4304 kB' 'Cached: 17183084 kB' 'SwapCached: 0 kB' 'Active: 13822956 kB' 'Inactive: 4050784 kB' 'Active(anon): 12647032 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689588 kB' 'Mapped: 194576 kB' 'Shmem: 11960680 kB' 'KReclaimable: 527672 kB' 'Slab: 1252492 kB' 'SReclaimable: 527672 kB' 'SUnreclaim: 724820 kB' 'KernelStack: 16544 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13913884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205524 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.372 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.373 20:25:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:45.373 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (the scan continues key by key over the rest of /proc/meminfo -- Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd -- each non-matching key hitting continue)
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
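The wall of trace above is setup/common.sh's get_meminfo helper: mapfile slurps the meminfo file, then an IFS=': ' read loop walks one 'Key: value' pair per iteration, hitting continue on every key until the requested one matches and its value is echoed. A minimal bash sketch of that loop, assuming the shape implied by the @NN markers (not the verbatim SPDK source, which is also node-aware; see the node0 pass further down):

    # Sketch of the scan traced above; names inferred from setup/common.sh@17-@33.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # every non-matching key falls through, as the `continue` lines show
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)   # -> 0, matching surp=0 at hugepages.sh@98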
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37498704 kB' 'MemAvailable: 42605412 kB' 'Buffers: 4304 kB' 'Cached: 17183104 kB' 'SwapCached: 0 kB' 'Active: 13822952 kB' 'Inactive: 4050784 kB' 'Active(anon): 12647028 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689584 kB' 'Mapped: 194576 kB' 'Shmem: 11960700 kB' 'KReclaimable: 527672 kB' 'Slab: 1252492 kB' 'SReclaimable: 527672 kB' 'SUnreclaim: 724820 kB' 'KernelStack: 16544 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13913908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205524 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:45.375 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (the loop walks every key of the snapshot above in order, MemTotal through HugePages_Free, hitting continue on each one)
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:05:45.378 nr_hugepages=1024
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:05:45.378 resv_hugepages=0
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:05:45.378 surplus_hugepages=0
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:05:45.378 anon_hugepages=0
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
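With surp and resv extracted the same way, hugepages.sh cross-checks the pool (the @106-@109 lines above and below): the kernel's HugePages_Total must equal the requested count plus surplus plus reserved. A sketch of that bookkeeping, reusing the assumed helper from the earlier sketch; 1024 and the zeroes are this run's values:

    nr_hugepages=1024                      # requested by the test
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || exit 1   # 1024 == 1024 + 0 + 0
    (( total == nr_hugepages )) || exit 1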
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37498504 kB' 'MemAvailable: 42605212 kB' 'Buffers: 4304 kB' 'Cached: 17183144 kB' 'SwapCached: 0 kB' 'Active: 13822608 kB' 'Inactive: 4050784 kB' 'Active(anon): 12646684 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689160 kB' 'Mapped: 194576 kB' 'Shmem: 11960740 kB' 'KReclaimable: 527672 kB' 'Slab: 1252492 kB' 'SReclaimable: 527672 kB' 'SUnreclaim: 724820 kB' 'KernelStack: 16528 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13913928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205524 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:45.378 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (the loop walks every key of the snapshot above in order, MemTotal through Unaccepted, hitting continue on each one)
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
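All 1024 pages check out globally, so the test moves to per-node verification: get_nodes (the hugepages.sh@26-@32 lines above) enumerates /sys/devices/system/node/node* and records the expectation of an even split, 512 pages on each of the two NUMA nodes. A sketch of that enumeration; the extglob shopt is an assumption needed for the +([0-9]) glob, and 512 is this run's even share of the pool:

    shopt -s extglob                      # +([0-9]) below is an extended glob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512     # strip through "...node", keep the index
    done
    no_nodes=${#nodes_sys[@]}             # 2 on this machine
    (( no_nodes > 0 )) || exit 1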
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:45.381 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32631052 kB' 'MemFree: 21617752 kB' 'MemUsed: 11013300 kB' 'SwapCached: 0 kB' 'Active: 7982312 kB' 'Inactive: 217696 kB' 'Active(anon): 7294032 kB' 'Inactive(anon): 0 kB' 'Active(file): 688280 kB' 'Inactive(file): 217696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7856944 kB' 'Mapped: 112924 kB' 'AnonPages: 346192 kB' 'Shmem: 6950968 kB' 'KernelStack: 8360 kB' 'PageTables: 4652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279956 kB' 'Slab: 707168 kB' 'SReclaimable: 279956 kB' 'SUnreclaim: 427212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (the loop walks the node0 snapshot above in order -- MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce -- hitting continue on each; the capture ends mid-scan here)
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.382 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.383 20:25:38 
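[Note: the heavily repeated read/continue xtrace above is setup/common.sh's get_meminfo walking one meminfo snapshot line by line until the requested field matches, then echoing its value. A minimal sketch of that pattern, assuming only the snapshot layout visible in the printf entries (the `sed` prefix-strip stands in for the script's extglob expansion):

  get_meminfo() {                       # usage: get_meminfo <field> [numa-node]
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # per-node snapshots live under /sys and prefix each line with "Node <n> "
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. HugePages_Surp -> 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
  }

Each skipped field shows up in the raw log as one [[ ... ]] test plus a `continue`, which is why a single call produces dozens of near-identical lines.]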
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:45.383 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656448 kB' 'MemFree: 15878988 kB' 'MemUsed: 11777460 kB' 'SwapCached: 0 kB' 'Active: 5840824 kB' 'Inactive: 3833088 kB' 'Active(anon): 5353180 kB' 'Inactive(anon): 0 kB' 'Active(file): 487644 kB' 'Inactive(file): 3833088 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9330528 kB' 'Mapped: 81764 kB' 'AnonPages: 343492 kB' 'Shmem: 5009796 kB' 'KernelStack: 8152 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 247716 kB' 'Slab: 545324 kB' 'SReclaimable: 247716 kB' 'SUnreclaim: 297608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-field scan of the node1 snapshot elided: every field fails the HugePages_Surp match and hits `continue` ...]
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:05:45.385 node0=512 expecting 512
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:05:45.385 node1=512 expecting 512
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:05:45.385
00:05:45.385 real 0m3.677s
00:05:45.385 user 0m1.375s
00:05:45.385 sys 0m2.362s
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable
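[Note: even_2G_alloc passes when the 1024 allocated 2 MB pages (2 GB total) land as 512 per NUMA node, which is what the two "expecting 512" lines above confirm. The same per-node count can be read back by hand; a sketch, assuming the /sys layout shown in this log:

  # per-node meminfo lines look like "Node 0 HugePages_Total:   512"
  for n in /sys/devices/system/node/node[0-9]*; do
    printf '%s=%s\n' "${n##*/}" \
      "$(awk '/HugePages_Total/ {print $4}' "$n/meminfo")"
  done
  # expected on this box: node0=512, node1=512
]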
00:05:45.385 20:25:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:45.385 ************************************
00:05:45.385 END TEST even_2G_alloc
00:05:45.385 ************************************
00:05:45.385 20:25:38 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:05:45.385 20:25:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:45.385 20:25:38 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:45.385 20:25:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:45.646 ************************************
00:05:45.646 START TEST odd_alloc
00:05:45.646 ************************************
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1129 -- # odd_alloc
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:45.646 20:25:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
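[Note: the odd_alloc sizing works out as follows: HUGEMEM=2049 MB is 2098176 kB, and with the default 2048 kB hugepage size that rounds up to 1025 pages (2098176 / 2048 = 1024.5). The per-node split the xtrace shows (node1=512, node0=513) can be reproduced with a floor-then-remainder loop; a sketch of the arithmetic, not the literal hugepages.sh code:

  size_kb=2098176 page_kb=2048
  nr=$(( (size_kb + page_kb - 1) / page_kb ))    # 1025, rounded up
  left=$nr
  for (( node = 1; node >= 0; node-- )); do      # fill from the highest node down
    nodes_test[node]=$(( left / (node + 1) ))    # node1 <- 512, node0 <- 513
    left=$(( left - nodes_test[node] ))
  done
]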
00:05:48.944 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:05:48.944 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:05:48.944 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:05:48.944 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:48.944 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
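[Note: the @95 test above gates the anonymous-hugepage check on transparent hugepages: /sys/kernel/mm/transparent_hugepage/enabled reports something like "always [madvise] never" with the active mode in brackets, and the glob `*\[\n\e\v\e\r\]*` just asks whether that active mode is [never]. A sketch of the equivalent standalone check:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
    # THP is enabled in some form, so count THP-backed anonymous memory
    anon=$(awk '/^AnonHugePages/ {print $2}' /proc/meminfo) # in kB
  else
    anon=0
  fi
]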
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.944 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37494724 kB' 'MemAvailable: 42601464 kB' 'Buffers: 4304 kB' 'Cached: 17183248 kB' 'SwapCached: 0 kB' 'Active: 13821760 kB' 'Inactive: 4050784 kB' 'Active(anon): 12645836 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687792 kB' 'Mapped: 194688 kB' 'Shmem: 11960844 kB' 'KReclaimable: 527704 kB' 'Slab: 1252372 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724668 kB' 'KernelStack: 16544 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37482752 kB' 'Committed_AS: 13914608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205588 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
[... per-field scan of the snapshot elided: every field fails the AnonHugePages match and hits `continue` ...]
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
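[Note: with anon=0 established, the verifier re-reads the system-wide snapshot for HugePages_Surp; together with the reserved-page count this feeds the acceptance test seen earlier in the log, total == nr_hugepages + surp + resv. A standalone version of that bookkeeping check, assuming the odd_alloc target of 1025 pages and that resv corresponds to the HugePages_Rsvd field:

  read -r total surp resv < <(awk '
    /^HugePages_Total/ {t=$2} /^HugePages_Surp/ {s=$2} /^HugePages_Rsvd/ {r=$2}
    END {print t, s, r}' /proc/meminfo)
  (( total == 1025 + surp + resv )) && echo "odd_alloc hugepage count OK"
]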
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.946 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37495188 kB' 'MemAvailable: 42601928 kB' 'Buffers: 4304 kB' 'Cached: 17183252 kB' 'SwapCached: 0 kB' 'Active: 13821424 kB' 'Inactive: 4050784 kB' 'Active(anon): 12645500 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687856 kB' 'Mapped: 194592 kB' 'Shmem: 11960848 kB' 'KReclaimable: 527704 kB' 'Slab: 1252368 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724664 kB' 'KernelStack: 16544 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37482752 kB' 'Committed_AS: 13914624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205572 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:48.947 [... per-field "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / continue / IFS=': ' / read -r var val _ scan over every preceding /proc/meminfo field until HugePages_Surp matches ...]
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.948 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37495348 kB' 'MemAvailable: 42602088 kB' 'Buffers: 4304 kB' 'Cached: 17183272 kB' 'SwapCached: 0 kB' 'Active: 13821656 kB' 'Inactive: 4050784 kB' 'Active(anon): 12645732 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688060 kB' 'Mapped: 194592 kB' 'Shmem: 11960868 kB' 'KReclaimable: 527704 kB' 'Slab: 1252368 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724664 kB' 'KernelStack: 16560 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37482752 kB' 'Committed_AS: 13914644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205572 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
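The long single-quoted line above is the mapfile snapshot of /proc/meminfo being replayed through printf '%s\n' at common.sh@16: the file is buffered into an array once, so the field scan operates on a consistent snapshot, which is why the trace prints the whole capture before each lookup. A hedged sketch of that snapshot-then-scan pattern (the helper name is hypothetical, and extglob is assumed for the +([0-9]) pattern):

shopt -s extglob
get_meminfo_snapshot_sketch() {
    local get=$1 var val _ mem
    mapfile -t mem < /proc/meminfo        # one consistent snapshot of the file
    mem=("${mem[@]#Node +([0-9]) }")      # no-op for /proc, strips "Node N " for per-node files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")   # replay the buffered lines
}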
00:05:48.950 [... per-field "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / continue / IFS=': ' / read -r var val _ scan over every preceding /proc/meminfo field until HugePages_Rsvd matches ...]
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
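With surp and resv both read back as 0, hugepages.sh echoes the derived counters (next lines) and then asserts that HugePages_Total equals nr_hugepages + surp + resv before distributing pages across nodes. A sketch of that consistency check, reusing the hypothetical get_meminfo_sketch from above and mirroring the hugepages.sh@106-109 steps visible in the trace:

nr_hugepages=1025
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)  # 1025 in this run
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2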
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:05:48.951 nr_hugepages=1025
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:05:48.951 resv_hugepages=0
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:05:48.951 surplus_hugepages=0
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:05:48.951 anon_hugepages=0
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.951 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37494592 kB' 'MemAvailable: 42601332 kB' 'Buffers: 4304 kB' 'Cached: 17183292 kB' 'SwapCached: 0 kB' 'Active: 13821648 kB' 'Inactive: 4050784 kB' 'Active(anon): 12645724 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688064 kB' 'Mapped: 194592 kB' 'Shmem: 11960888 kB' 'KReclaimable: 527704 kB' 'Slab: 1252368 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724664 kB' 'KernelStack: 16560 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37482752 kB' 'Committed_AS: 13914664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205588 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:48.952 [... per-field "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / continue / IFS=': ' / read -r var val _ scan over every preceding /proc/meminfo field until HugePages_Total matches ...]
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:48.953 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32631052 kB' 'MemFree: 21612060 kB' 
'MemUsed: 11018992 kB' 'SwapCached: 0 kB' 'Active: 7982792 kB' 'Inactive: 217696 kB' 'Active(anon): 7294512 kB' 'Inactive(anon): 0 kB' 'Active(file): 688280 kB' 'Inactive(file): 217696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7856972 kB' 'Mapped: 112940 kB' 'AnonPages: 346632 kB' 'Shmem: 6950996 kB' 'KernelStack: 8376 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279956 kB' 'Slab: 707072 kB' 'SReclaimable: 279956 kB' 'SUnreclaim: 427116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:48.953 20:25:42 [xtrace condensed: get_meminfo steps through the node0 meminfo fields (MemTotal through Unaccepted), testing each against HugePages_Surp and continuing] 00:05:49.216 20:25:42
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.216 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.217 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656448 kB' 'MemFree: 15881776 kB' 'MemUsed: 11774672 kB' 'SwapCached: 0 kB' 'Active: 5839236 kB' 'Inactive: 3833088 kB' 'Active(anon): 5351592 kB' 'Inactive(anon): 0 kB' 'Active(file): 487644 kB' 'Inactive(file): 3833088 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9330664 kB' 'Mapped: 81652 kB' 'AnonPages: 341764 kB' 'Shmem: 5009932 kB' 'KernelStack: 8200 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 247748 kB' 'Slab: 545296 kB' 'SReclaimable: 247748 kB' 'SUnreclaim: 297548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:49.217 20:25:42 [xtrace condensed: get_meminfo steps through the node1 meminfo fields (MemTotal through HugePages_Free), testing each against HugePages_Surp and continuing] 00:05:49.218 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.218 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.218 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.218 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.218 20:25:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:49.218 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:49.218 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:49.218 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:49.218 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:49.219 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:05:49.219 node0=513 expecting 513 00:05:49.219 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:49.219 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:49.219 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:49.219 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:05:49.219 node1=512 expecting 512 00:05:49.219 20:25:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:49.219 00:05:49.219 real 0m3.569s 00:05:49.219 user 0m1.342s 00:05:49.219 sys 0m2.250s 00:05:49.219 20:25:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.219 20:25:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:49.219 ************************************ 00:05:49.219 END TEST odd_alloc 00:05:49.219 ************************************
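[editor's note] The odd_alloc pass that just finished reduces to one pattern: read HugePages_Total back from each NUMA node's meminfo and compare it against the expected 513/512 split of the 1025-page request. Below is a minimal bash sketch of that get_meminfo/verify loop, not the verbatim setup/common.sh; it assumes a Linux host with /sys/devices/system/node populated, and the 513/512 expectations are taken from this run's output.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: strip the "Node N " prefix,
# split each line on ': ', and return the value for the requested key.
shopt -s extglob

get_meminfo_node() {
    local get=$1 node=$2 var val _ line
    local -a mem
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")   # same prefix strip as setup/common.sh@29
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Expected split of 1025 pages over two nodes, per the test output above.
for node in 0 1; do
    expected=$(( node == 0 ? 513 : 512 ))
    echo "node${node}=$(get_meminfo_node HugePages_Total "$node") expecting ${expected}"
done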
00:05:49.219 20:25:42 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:05:49.219 20:25:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.219 20:25:42 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.219 20:25:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:49.219 ************************************ 00:05:49.219 START TEST custom_alloc 00:05:49.219 ************************************ 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1129 -- # custom_alloc 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc --
setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:05:49.219 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.220 20:25:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:52.515 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:05:52.515 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:05:52.515 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:05:52.515 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:05:52.515 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:52.515 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.515 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 36457888 kB' 'MemAvailable: 41564628 kB' 'Buffers: 4304 kB' 'Cached: 17183400 kB' 'SwapCached: 0 kB' 'Active: 13823984 kB' 'Inactive: 4050784 kB' 'Active(anon): 12648060 kB' 'Inactive(anon): 
0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 690156 kB' 'Mapped: 194648 kB' 'Shmem: 11960996 kB' 'KReclaimable: 527704 kB' 'Slab: 1251956 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724252 kB' 'KernelStack: 16752 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36959488 kB' 'Committed_AS: 13917788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205796 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:52.515 20:25:45 [xtrace condensed: get_meminfo steps through the /proc/meminfo fields (MemTotal through WritebackTmp), testing each against AnonHugePages and continuing]
00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.516 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:05:52.517 
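What this block traces: setup/common.sh's get_meminfo snapshots /proc/meminfo (or a per-NUMA-node meminfo file when a node is given), strips any 'Node N ' line prefix, then scans key by key until the requested field matches and echoes its value. A minimal bash reconstruction from the xtrace above -- an approximation of the traced helper, not the verbatim SPDK source:

    shopt -s extglob                     # the +([0-9]) pattern below needs extglob

    get_meminfo() {                      # reconstructed from setup/common.sh@16-33
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node argument, read that node's meminfo instead (assumption:
        # this is why the trace tests /sys/devices/system/node/node$node/meminfo)
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # per-node files prefix every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip all other keys
            echo "$val" && return 0            # e.g. AnonHugePages -> 0 above
        done < <(printf '%s\n' "${mem[@]}")
    }

Called as e.g. get_meminfo AnonHugePages, this yields the anon=0 assignment seen above.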
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:52.517 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 36457388 kB' 'MemAvailable: 41564128 kB' 'Buffers: 4304 kB' 'Cached: 17183404 kB' 'SwapCached: 0 kB' 'Active: 13823452 kB' 'Inactive: 4050784 kB' 'Active(anon): 12647528 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689632 kB' 'Mapped: 194632 kB' 'Shmem: 11961000 kB' 'KReclaimable: 527704 kB' 'Slab: 1252020 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724316 kB' 'KernelStack: 16688 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36959488 kB' 'Committed_AS: 13917804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205748 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:52.517-00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # per-key scan: read -r var val _; [[ $var == HugePages_Surp ]] || continue (repeated once for every key in the snapshot above, matching on HugePages_Surp)
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
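Each probe's result is captured into a scalar before the consistency check further down. A hedged sketch of the call pattern implied by the anon=0 / surp=0 / resv=0 assignments in the trace (the variable names are the trace's own; the command-substitution form is an assumption):

    anon=$(get_meminfo AnonHugePages)     # hugepages.sh@96 -> 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # hugepages.sh@98 -> 0
    resv=$(get_meminfo HugePages_Rsvd)    # hugepages.sh@99 -> 0, traced next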
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:52.519 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 36457816 kB' 'MemAvailable: 41564556 kB' 'Buffers: 4304 kB' 'Cached: 17183428 kB' 'SwapCached: 0 kB' 'Active: 13822852 kB' 'Inactive: 4050784 kB' 'Active(anon): 12646928 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689088 kB' 'Mapped: 194632 kB' 'Shmem: 11961024 kB' 'KReclaimable: 527704 kB' 'Slab: 1252116 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724412 kB' 'KernelStack: 16576 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36959488 kB' 'Committed_AS: 13915324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205700 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:52.519-00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # per-key scan: read -r var val _; [[ $var == HugePages_Rsvd ]] || continue (repeated once for every key in the snapshot above, matching on HugePages_Rsvd)
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:05:52.784 nr_hugepages=1536
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:05:52.784 resv_hugepages=0
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:05:52.784 surplus_hugepages=0
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:05:52.784 anon_hugepages=0
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:52.784 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 36458940 kB' 'MemAvailable: 41565680 kB' 'Buffers: 4304 kB' 'Cached: 17183448 kB' 'SwapCached: 0 kB' 'Active: 13822860 kB' 'Inactive: 4050784 kB' 'Active(anon): 12646936 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689092 kB' 'Mapped: 194632 kB' 'Shmem: 11961044 kB' 'KReclaimable: 527704 kB' 'Slab: 1252116 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724412 kB' 'KernelStack: 16576 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36959488 kB' 'Committed_AS: 13915348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205700 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # per-key scan: read -r var val _; [[ $var == HugePages_Total ]] || continue (keys MemTotal through SwapTotal checked so far)
setup/common.sh@31 -- # IFS=': ' 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.785 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.786 20:25:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.786 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.786 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@111 -- # get_nodes 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32631052 kB' 'MemFree: 21620468 kB' 'MemUsed: 11010584 kB' 'SwapCached: 0 kB' 'Active: 7984432 kB' 'Inactive: 217696 kB' 'Active(anon): 7296152 kB' 'Inactive(anon): 0 kB' 'Active(file): 688280 kB' 'Inactive(file): 217696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7856996 kB' 'Mapped: 112948 kB' 'AnonPages: 348272 kB' 'Shmem: 6951020 kB' 'KernelStack: 8360 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279956 kB' 'Slab: 706872 kB' 'SReclaimable: 279956 kB' 'SUnreclaim: 426916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.787 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:52.788 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656448 kB' 'MemFree: 14838724 kB' 'MemUsed: 12817724 kB' 'SwapCached: 0 kB' 'Active: 5839316 kB' 'Inactive: 3833088 kB' 'Active(anon): 5351672 kB' 'Inactive(anon): 0 kB' 'Active(file): 487644 kB' 'Inactive(file): 3833088 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9330800 kB' 'Mapped: 82296 kB' 'AnonPages: 341672 kB' 'Shmem: 5010068 kB' 'KernelStack: 8184 kB' 'PageTables: 3820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 247748 kB' 'Slab: 545244 kB' 'SReclaimable: 247748 kB' 'SUnreclaim: 297496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.789 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
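For reference, the lookup the elided scans above perform can be reproduced as a few lines of standalone Bash. This is a simplified sketch of what setup/common.sh's get_meminfo does, not the script itself; the sed/awk pipeline is an assumed equivalent of its mapfile/read loop:

    #!/usr/bin/env bash
    # Sketch: read one field from /proc/meminfo, or from a per-node
    # /sys/devices/system/node/node<N>/meminfo when a node id is given.
    # Per-node files prefix each line with "Node <N> ", which is stripped
    # first (the traced script does the same via "${mem[@]#Node +([0-9]) }").
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        sed 's/^Node [0-9]* //' "$mem_f" | awk -v k="$get:" '$1 == k {print $2}'
    }

    get_meminfo HugePages_Total      # e.g. 1536 on this box
    get_meminfo HugePages_Surp 0     # per-node value, e.g. 0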
00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:05:52.790 node0=512 expecting 512 00:05:52.790 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:52.791 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:52.791 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:52.791 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:05:52.791 node1=1024 expecting 1024 00:05:52.791 20:25:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:52.791 00:05:52.791 real 0m3.590s 00:05:52.791 user 0m1.383s 00:05:52.791 sys 0m2.249s 00:05:52.791 20:25:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.791 20:25:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:52.791 ************************************ 00:05:52.791 END TEST custom_alloc 00:05:52.791 ************************************ 00:05:52.791 20:25:46 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:52.791 20:25:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.791 20:25:46 setup.sh.hugepages -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.791 20:25:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:52.791 ************************************ 00:05:52.791 START TEST no_shrink_alloc 00:05:52.791 ************************************ 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1129 -- # no_shrink_alloc 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # 
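The node0=512 / node1=1024 split that custom_alloc verifies above maps onto the kernel's per-node hugetlb sysfs knobs. A minimal sketch of requesting and re-reading such a split directly (standard kernel interface, shown for illustration; the test drives it through scripts/setup.sh instead):

    # Ask for 512 and 1024 2 MiB hugepages on nodes 0 and 1, then read back
    # what the kernel actually reserved (it may grant fewer under pressure).
    for n in 0 1; do
        want=$(( n == 0 ? 512 : 1024 ))
        echo "$want" | sudo tee \
            /sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages >/dev/null
        got=$(cat /sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages)
        echo "node$n=$got expecting $want"
    done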
get_test_nr_hugepages_per_node 0 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.791 20:25:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:56.082 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:05:56.082 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:05:56.082 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:05:56.082 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:56.082 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 
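Before the scan, get_test_nr_hugepages converted the 2 GiB request into a page count and pinned it to one node: size is given in kB, so 2097152 kB / 2048 kB per page = 1024 pages, all assigned to node 0 because the test passed node id 0, and the result is exported as NRHUGE=1024 HUGENODE=0 for scripts/setup.sh. A condensed sketch of that accounting (variable names follow the trace; the function body is a hypothetical simplification of setup/hugepages.sh, not its verbatim code):

#!/usr/bin/env bash
# nr_hugepages and nodes_test end up as globals, as in the trace.
declare -a nodes_test=()

get_test_nr_hugepages() {
    local size=$1; shift                 # requested size in kB; 2097152 kB = 2 GiB
    local node_ids=("$@")                # explicit NUMA nodes, here just (0)
    local default_hugepages=2048         # kB, the Hugepagesize in the snapshots
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024 pages
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages   # pin every page to the named node
    done
}

get_test_nr_hugepages 2097152 0
echo "NRHUGE=$nr_hugepages HUGENODE=0"   # the env the trace hands to scripts/setup.sh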
00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.082 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.083 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37473716 kB' 'MemAvailable: 42580456 kB' 'Buffers: 4304 kB' 'Cached: 17183556 kB' 'SwapCached: 0 kB' 'Active: 13823544 kB' 'Inactive: 4050784 kB' 'Active(anon): 12647620 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689716 kB' 'Mapped: 194748 kB' 'Shmem: 11961152 kB' 'KReclaimable: 527704 kB' 'Slab: 1251968 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724264 kB' 'KernelStack: 16592 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13915692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:56.083 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.083 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.083 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.083 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.083 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.083 20:25:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.083 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # ... IFS=': ' read/compare loop continues over MemAvailable .. Committed_AS, none matching AnonHugePages ... 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37475004 kB' 'MemAvailable: 42581744 kB' 'Buffers: 4304 kB' 'Cached: 17183560 kB' 'SwapCached: 0 kB' 'Active: 13823620 kB' 'Inactive: 4050784 kB' 'Active(anon): 12647696 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689752 kB' 'Mapped: 194616 kB' 'Shmem: 11961156 kB' 'KReclaimable: 527704 kB' 'Slab: 1251944 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724240 kB' 'KernelStack: 16592 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13915712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205572 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.084 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:56.085 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # ... IFS=': ' read/compare loop continues over SwapCached .. CmaFree, none matching HugePages_Surp ... 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.086 20:25:49
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.086 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37474524 kB' 'MemAvailable: 42581264 kB' 'Buffers: 4304 kB' 'Cached: 17183576 kB' 'SwapCached: 0 kB' 'Active: 13823332 kB' 'Inactive: 4050784 kB' 'Active(anon): 12647408 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689436 kB' 'Mapped: 194616 kB' 'Shmem: 11961172 kB' 'KReclaimable: 527704 kB' 'Slab: 1251944 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724240 kB' 'KernelStack: 16592 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13915732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205572 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:56.087 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # ... IFS=': ' read/compare loop continues over Active .. Shmem, none matching HugePages_Rsvd; snapshot arithmetic sketched below ... 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
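Each printf block above is one full /proc/meminfo snapshot, and the three scans pull AnonHugePages, HugePages_Surp, and HugePages_Rsvd out of it. The numbers are self-consistent: HugePages_Total 1024 at Hugepagesize 2048 kB gives Hugetlb 2097152 kB, i.e. the 2 GiB that NRHUGE=1024 requested. A small check along those lines, reusing the get_meminfo sketch above (the comparison is an illustrative assumption, not the literal verify_nr_hugepages logic):

# Values in the comments are taken from the snapshots traced above.
total=$(get_meminfo HugePages_Total)     # 1024
free=$(get_meminfo HugePages_Free)       # 1024
surp=$(get_meminfo HugePages_Surp)       # 0
resv=$(get_meminfo HugePages_Rsvd)       # 0
hpsize_kb=2048                           # Hugepagesize: 2048 kB
echo "Hugetlb: $(( total * hpsize_kb )) kB"          # 1024 * 2048 = 2097152 kB = 2 GiB
(( free == total )) && echo "all pages still free"
(( total - surp - resv == 1024 )) && echo "1024 pages present, NRHUGE satisfied"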
00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.088 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:05:56.089 nr_hugepages=1024 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:05:56.089 resv_hugepages=0 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:05:56.089 surplus_hugepages=0 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:05:56.089 anon_hugepages=0 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60287500 kB' 'MemFree: 37474272 kB' 'MemAvailable: 42581012 kB' 'Buffers: 4304 kB' 'Cached: 17183620 kB' 'SwapCached: 0 kB' 'Active: 13823008 kB' 'Inactive: 4050784 kB' 'Active(anon): 12647084 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689044 kB' 'Mapped: 194616 kB' 'Shmem: 11961216 kB' 'KReclaimable: 527704 kB' 'Slab: 1251944 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724240 kB' 'KernelStack: 16576 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13915756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205572 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.089 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.090 20:25:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.090 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.090 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.091 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:56.091 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.091 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.091 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.091 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:56.091 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:56.091 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32631052 kB' 'MemFree: 20554000 kB' 'MemUsed: 12077052 kB' 'SwapCached: 0 kB' 'Active: 7983708 kB' 'Inactive: 217696 kB' 'Active(anon): 7295428 kB' 'Inactive(anon): 0 kB' 'Active(file): 688280 kB' 'Inactive(file): 217696 kB' 'Unevictable: 3072 kB' 'Mlocked: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7857012 kB' 'Mapped: 112964 kB' 'AnonPages: 347492 kB' 'Shmem: 6951036 kB' 'KernelStack: 8376 kB' 'PageTables: 4596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279956 kB' 'Slab: 706912 kB' 'SReclaimable: 279956 kB' 'SUnreclaim: 426956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.352 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.353 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.354 20:25:49
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:05:56.354 node0=1024 expecting 1024 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:56.354 20:25:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:59.648 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:05:59.648 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:05:59.648 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:05:59.648 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:05:59.648 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:59.648 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:59.648 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:59.648 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:05:59.648 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:05:59.648 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:05:59.648 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:05:59.648 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:05:59.648 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:05:59.648 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:05:59.648 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:59.648 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.649 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37497064 kB' 'MemAvailable: 42603804 kB' 'Buffers: 4304 kB' 'Cached: 17183696 kB' 'SwapCached: 0 kB' 'Active: 13823768 kB' 'Inactive: 4050784 kB' 
'Active(anon): 12647844 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689392 kB' 'Mapped: 194732 kB' 'Shmem: 11961292 kB' 'KReclaimable: 527704 kB' 'Slab: 1252364 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724660 kB' 'KernelStack: 16592 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13916220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205668 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:59.649-00:05:59.650 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [... repeated trace records elided: each /proc/meminfo key before AnonHugePages is tested with [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and skipped via 'continue' ...]
00:05:59.650 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.650 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:59.650 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.650 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:59.650 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:05:59.650 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:05:59.650 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:59.650 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18-@31 -- # [... remaining get_meminfo locals and mapfile setup elided, identical to the AnonHugePages call above ...]
00:05:59.651 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37497564 kB' 'MemAvailable: 42604304 kB' 'Buffers: 4304 kB' 'Cached: 17183700 kB' 'SwapCached: 0 kB' 'Active: 13823072 kB' 'Inactive: 4050784 kB' 'Active(anon): 12647148 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 689104 kB' 'Mapped: 194644 kB' 'Shmem: 11961296 kB' 'KReclaimable: 527704 kB' 'Slab: 1252344 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724640 kB' 'KernelStack: 16592 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13916240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
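The block above is setup/common.sh's get_meminfo helper running under set -x: it snapshots the whole meminfo file into an array (common.sh@28-@29, stripping any 'Node <n>' prefix) and then scans it entry by entry with IFS=': ' read until the requested key matches, echoing the value and returning. A minimal sketch of that lookup, reconstructed from the traced commands only; the name get_meminfo_sketch and the sed-based prefix strip are stand-ins, not the literal setup/common.sh implementation:

get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo var val _
	# per-node lookups read the node's own meminfo file (the common.sh@23 test in the trace)
	[[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	while IFS=': ' read -r var val _; do
		# each non-matching key is one '[[ ... ]] / continue' pair in the trace
		[[ $var == "$get" ]] || continue
		echo "$val" # e.g. 0 for AnonHugePages, 1024 for HugePages_Total on this box
		return 0
	done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
	return 1
}

Against the snapshot just printed, get_meminfo_sketch AnonHugePages yields 0, which is exactly the 'echo 0' / 'anon=0' pair recorded above.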
00:05:59.651-00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-@32 -- # [... repeated trace records elided: each /proc/meminfo key before HugePages_Surp is tested with [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and skipped via 'continue' ...]
00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:59.653 20:25:52
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18-@31 -- # [... remaining get_meminfo locals and mapfile setup elided, identical to the calls above ...]
00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37497564 kB' 'MemAvailable: 42604304 kB' 'Buffers: 4304 kB' 'Cached: 17183700 kB' 'SwapCached: 0 kB' 'Active: 13822752 kB' 'Inactive: 4050784 kB' 'Active(anon): 12646828 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688784 kB' 'Mapped: 194644 kB' 'Shmem: 11961296 kB' 'KReclaimable: 527704 kB' 'Slab: 1252344 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724640 kB' 'KernelStack: 16576 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13916260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
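Note that in all three lookups traced in this stage the node argument is empty, so the common.sh@23 test against /sys/devices/system/node/node/meminfo fails and the helper falls back to /proc/meminfo; the 'node0=1024 expecting 1024' line earlier in the stage came from the per-node variant of the same lookup. Hedged usage against the sketch above:

get_meminfo_sketch HugePages_Total      # system-wide, reads /proc/meminfo -> 1024 here
get_meminfo_sketch HugePages_Total 0    # per-node, reads /sys/devices/system/node/node0/meminfo -> 1024 here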
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.653 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:05:59.654 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / continue sequence for every remaining /proc/meminfo key (Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) until the key under test equals HugePages_Rsvd]
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:05:59.655 nr_hugepages=1024
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:05:59.655 resv_hugepages=0
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:05:59.655 surplus_hugepages=0
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:05:59.655 anon_hugepages=0
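For readability, the loop condensed above is setup/common.sh's get_meminfo parser: split each meminfo line on ': ', skip keys with continue until the requested one, then echo its value. A minimal standalone sketch of the same idea follows; it is not the verbatim SPDK source, and the function name get_meminfo_value plus the direct read from /proc/meminfo are illustrative assumptions.

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo key, e.g. HugePages_Rsvd.
    # Mirrors the traced loop: split on ': ', skip keys until a match.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of 'continue' above
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # key not present
    }

    get_meminfo_value HugePages_Rsvd   # prints 0 on this host, per the echo 0 above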
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60287500 kB' 'MemFree: 37497948 kB' 'MemAvailable: 42604688 kB' 'Buffers: 4304 kB' 'Cached: 17183756 kB' 'SwapCached: 0 kB' 'Active: 13822712 kB' 'Inactive: 4050784 kB' 'Active(anon): 12646788 kB' 'Inactive(anon): 0 kB' 'Active(file): 1175924 kB' 'Inactive(file): 4050784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 688716 kB' 'Mapped: 194644 kB' 'Shmem: 11961352 kB' 'KReclaimable: 527704 kB' 'Slab: 1252344 kB' 'SReclaimable: 527704 kB' 'SUnreclaim: 724640 kB' 'KernelStack: 16576 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37483776 kB' 'Committed_AS: 13916284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205620 kB' 'VmallocChunk: 0 kB' 'Percpu: 87264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3464620 kB' 'DirectMap2M: 45494272 kB' 'DirectMap1G: 20971520 kB'
00:05:59.655 20:25:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 again walks each key of the snapshot above in order, from MemTotal through Unaccepted, issuing continue on every one, until the next key matches the target HugePages_Total]
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
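The get_nodes trace above walks /sys/devices/system/node/node+([0-9]) and records each node's hugepage count in nodes_sys[], ending with no_nodes=2. A rough standalone equivalent, assuming the 2048 kB hugepage size reported in the snapshot (the array name nodes_sys is kept only for symmetry with the trace):

    # Enumerate NUMA nodes and record each node's 2 MiB hugepage total.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this box: node0=1024, node1=0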
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:59.657 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:59.658 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32631052 kB' 'MemFree: 20568680 kB' 'MemUsed: 12062372 kB' 'SwapCached: 0 kB' 'Active: 7982764 kB' 'Inactive: 217696 kB' 'Active(anon): 7294484 kB' 'Inactive(anon): 0 kB' 'Active(file): 688280 kB' 'Inactive(file): 217696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7857016 kB' 'Mapped: 112972 kB' 'AnonPages: 346620 kB' 'Shmem: 6951040 kB' 'KernelStack: 8328 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 279956 kB' 'Slab: 707016 kB' 'SReclaimable: 279956 kB' 'SUnreclaim: 427060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 walks each key of the node0 snapshot above in order, from MemTotal through HugePages_Free, issuing continue on every one, until the next key matches the target HugePages_Surp]
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
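This second lookup reads the per-node file /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the mem=("${mem[@]#Node +([0-9]) }") expansion traced above strips that prefix so the same ': ' parser applies unchanged. A standalone sketch of that step, with the node0 path hard-coded for illustration:

    # Load node0's meminfo and strip the "Node 0 " prefix from every line,
    # exactly as the extglob parameter expansion in the trace does.
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep '^HugePages_Surp'   # -> HugePages_Surp: 0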
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:05:59.659 node0=1024 expecting 1024
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:05:59.659
00:05:59.659 real 0m6.921s
00:05:59.659 user 0m2.563s
00:05:59.659 sys 0m4.409s
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.659 20:25:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:59.659 ************************************
00:05:59.659 END TEST no_shrink_alloc
00:05:59.659 ************************************
00:05:59.917 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:05:59.918 20:25:53 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:05:59.918
00:05:59.918 real 0m22.421s
00:05:59.918 user 0m8.515s
00:05:59.918 sys 0m14.191s
00:05:59.918 20:25:53 setup.sh.hugepages -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:59.918 20:25:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:59.918 ************************************
00:05:59.918 END TEST hugepages
00:05:59.918 ************************************
00:05:59.918 20:25:53 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
00:05:59.918 20:25:53 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:59.918 20:25:53 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:59.918 20:25:53 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:59.918 ************************************
00:05:59.918 START TEST driver
00:05:59.918 ************************************
00:05:59.918 20:25:53 setup.sh.driver -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh
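clear_hp, traced at the end of the hugepages suite above, zeroes every hugepage pool on every node before the next test group runs. A hedged sketch of that cleanup; the trace only shows the bare "echo 0", so the sysfs write target and the sudo tee redirection are assumptions:

    # Reset all hugepage pools on all NUMA nodes, then mark them cleared,
    # mirroring the nested @38/@39/@40 loops in the trace.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null
        done
    done
    export CLEAR_HUGE=yes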
00:05:59.918 * Looking for test storage...
00:05:59.918 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:05:59.918 20:25:53 setup.sh.driver -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:59.918 20:25:53 setup.sh.driver -- common/autotest_common.sh@1711 -- # lcov --version
00:05:59.918 20:25:53 setup.sh.driver -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:00.176 20:25:53 setup.sh.driver -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-:
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-:
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<'
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@345 -- # : 1
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
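The cmp_versions trace that begins here (and resolves just below) compares the detected lcov version to 2 field by field; "lt 1.15 2" succeeds because ver1[0]=1 is less than ver2[0]=2. A compact sketch of the same comparison, simplified to '.' separators where the real script also splits on '-' and ':':

    # True (exit 0) when dotted version $1 is strictly less than $2.
    lt() {
        local IFS=.
        local -a v1=($1) v2=($2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((10#${v1[i]:-0} < 10#${v2[i]:-0})) && return 0
            ((10#${v2[i]:-0} < 10#${v1[i]:-0})) && return 1
        done
        return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov predates 2.x"   # matches the return 0 below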
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@353 -- # local d=1
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@355 -- # echo 1
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@353 -- # local d=2
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@355 -- # echo 2
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:00.176 20:25:53 setup.sh.driver -- scripts/common.sh@368 -- # return 0
00:06:00.176 20:25:53 setup.sh.driver -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[xtrace condensed: common/autotest_common.sh@1724-1725 then export and assign LCOV_OPTS and LCOV, each carrying the same multi-line option string: --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh]
00:06:00.176 20:25:53 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:06:00.176 20:25:53 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:06:00.176 20:25:53 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
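The guess_driver trace that follows picks vfio-pci because IOMMU groups are present (167 of them) and modprobe can resolve the vfio_pci module stack. Roughly, the decision reduces to the sketch below; the uio_pci_generic fallback is an assumption, since this log only exercises the vfio path:

    # Prefer vfio-pci when the IOMMU is active and the module resolves.
    is_driver() {
        modprobe --show-depends "$1" | grep -q '\.ko'
    }
    shopt -s nullglob
    iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) && is_driver vfio_pci; then
        echo vfio-pci
    else
        echo uio_pci_generic   # assumed fallback, not shown in this log
    fi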
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:05.448 20:25:58 setup.sh.driver -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.448 20:25:58 setup.sh.driver -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.448 20:25:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:05.448 ************************************ 00:06:05.448 START TEST guess_driver 00:06:05.448 ************************************ 00:06:05.448 20:25:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1129 -- # guess_driver 00:06:05.448 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:05.448 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:05.448 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:05.448 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:05.448 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:05.448 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:05.448 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:05.448 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 167 > 0 )) 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:06:05.449 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:05.449 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:05.449 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:05.449 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:05.449 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:06:05.449 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:06:05.449 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:06:05.449 Looking for driver=vfio-pci 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
# setup output config 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.449 20:25:58 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:08.897 20:26:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:14.169 00:06:14.169 real 0m8.765s 00:06:14.169 user 0m2.746s 00:06:14.169 sys 0m5.230s 00:06:14.169 20:26:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.169 20:26:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:14.169 ************************************ 00:06:14.169 END TEST guess_driver 00:06:14.169 
************************************ 00:06:14.169 00:06:14.169 real 0m13.948s 00:06:14.169 user 0m4.179s 00:06:14.169 sys 0m8.137s 00:06:14.169 20:26:07 setup.sh.driver -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.169 20:26:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:14.169 ************************************ 00:06:14.169 END TEST driver 00:06:14.169 ************************************ 00:06:14.169 20:26:07 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:06:14.169 20:26:07 setup.sh -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.169 20:26:07 setup.sh -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.169 20:26:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:14.169 ************************************ 00:06:14.169 START TEST devices 00:06:14.169 ************************************ 00:06:14.169 20:26:07 setup.sh.devices -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:06:14.169 * Looking for test storage... 00:06:14.169 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:06:14.169 20:26:07 setup.sh.devices -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.169 20:26:07 setup.sh.devices -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.169 20:26:07 setup.sh.devices -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.169 20:26:07 setup.sh.devices -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:06:14.169 20:26:07 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.170 20:26:07 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:06:14.170 20:26:07 setup.sh.devices -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.170 20:26:07 setup.sh.devices -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.170 --rc genhtml_branch_coverage=1 00:06:14.170 --rc genhtml_function_coverage=1 00:06:14.170 --rc genhtml_legend=1 00:06:14.170 --rc geninfo_all_blocks=1 00:06:14.170 --rc geninfo_unexecuted_blocks=1 00:06:14.170 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:14.170 ' 00:06:14.170 20:26:07 setup.sh.devices -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.170 --rc genhtml_branch_coverage=1 00:06:14.170 --rc genhtml_function_coverage=1 00:06:14.170 --rc genhtml_legend=1 00:06:14.170 --rc geninfo_all_blocks=1 00:06:14.170 --rc geninfo_unexecuted_blocks=1 00:06:14.170 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:14.170 ' 00:06:14.170 20:26:07 setup.sh.devices -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.170 --rc genhtml_branch_coverage=1 00:06:14.170 --rc genhtml_function_coverage=1 00:06:14.170 --rc genhtml_legend=1 00:06:14.170 --rc geninfo_all_blocks=1 00:06:14.170 --rc geninfo_unexecuted_blocks=1 00:06:14.170 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:14.170 ' 00:06:14.170 20:26:07 setup.sh.devices -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.170 --rc genhtml_branch_coverage=1 00:06:14.170 --rc genhtml_function_coverage=1 00:06:14.170 --rc genhtml_legend=1 00:06:14.170 --rc geninfo_all_blocks=1 00:06:14.170 --rc geninfo_unexecuted_blocks=1 00:06:14.170 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:14.170 ' 00:06:14.170 20:26:07 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:14.170 20:26:07 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:14.170 20:26:07 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:14.170 20:26:07 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # bdf=0000:af:00.0 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # bdf=0000:b0:00.0 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:17.454 20:26:10 setup.sh.devices -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:17.454 20:26:10 setup.sh.devices 
-- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:06:17.454 No valid GPT data, bailing 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:17.454 20:26:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:17.454 20:26:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:17.454 20:26:10 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:af:00.0 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:06:17.454 No valid GPT data, bailing 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:17.454 20:26:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:17.454 20:26:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:17.454 20:26:10 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:af:00.0 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:b0:00.0 
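The scan traced above applies one rule per namespace: resolve the block device to its PCI address, treat the disk as free when spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing"), and keep it only if its capacity clears min_disk_size (3221225472 bytes, i.e. 3 GiB). A minimal standalone sketch of that filter, not the SPDK script itself; the sysfs-based PCI lookup and the nvme*n1 glob are assumptions:

    # Sketch: keep unclaimed NVMe namespaces of at least 3 GiB, mapped to
    # their PCI addresses (helper logic assumed, not SPDK's own).
    min_disk_size=$((3 * 1024 * 1024 * 1024))
    for block in /sys/block/nvme*n1; do
        dev=${block##*/}
        pci=$(basename "$(readlink -f "$block/device/device")")       # e.g. 0000:5e:00.0
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue  # skip partitioned disks
        size=$(( $(cat "$block/size") * 512 ))                        # size file counts 512 B sectors
        (( size >= min_disk_size )) && echo "$dev -> $pci ($size bytes)"
    done

On this machine the filter keeps all three namespaces: nvme0n1 (1920383410176 bytes) plus nvme1n1 and nvme2n1 (375083606016 bytes each), all well above the 3 GiB floor.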
00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\b\0\:\0\0\.\0* ]] 00:06:17.454 20:26:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme2n1 pt 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:06:17.454 No valid GPT data, bailing 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:06:17.454 20:26:10 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:06:17.455 20:26:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:06:17.455 20:26:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:06:17.455 20:26:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:06:17.455 20:26:10 setup.sh.devices -- setup/common.sh@80 -- # echo 375083606016 00:06:17.455 20:26:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:06:17.455 20:26:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:17.455 20:26:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:b0:00.0 00:06:17.455 20:26:10 setup.sh.devices -- setup/devices.sh@209 -- # (( 3 > 0 )) 00:06:17.455 20:26:10 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:17.455 20:26:10 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:17.455 20:26:10 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.455 20:26:10 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.455 20:26:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:17.713 ************************************ 00:06:17.713 START TEST nvme_mount 00:06:17.713 ************************************ 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1129 -- # nvme_mount 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- 
setup/common.sh@46 -- # (( part <= part_no )) 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:17.713 20:26:10 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:18.649 Creating new GPT entries in memory. 00:06:18.649 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:18.649 other utilities. 00:06:18.649 20:26:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:18.649 20:26:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:18.649 20:26:11 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:18.649 20:26:11 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:18.649 20:26:11 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:19.586 Creating new GPT entries in memory. 00:06:19.586 The operation has completed successfully. 00:06:19.586 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:19.586 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:19.586 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1814789 00:06:19.586 20:26:12 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.586 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:19.586 20:26:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.586 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:19.586 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:19.586 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local 
test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:19.847 20:26:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:22.393 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.393 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:22.393 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
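The @60/@62 pairs above are verify() at work: setup.sh config is re-run with PCI_ALLOWED pinned to the device under test, and each "<pci> ... <status>" line is scanned for the expected "Active devices: ... nvme0n1:nvme0n1p1" marker before found is set. A condensed sketch of that read loop, assuming a $rootdir variable pointing at the SPDK checkout; the names are illustrative, not the script's own:

    # Sketch: flag the allowed controller once its status line reports
    # the expected active mount ($rootdir is an assumed variable).
    dev=0000:5e:00.0 mounts=nvme0n1:nvme0n1p1 found=0
    while read -r pci _ _ status; do
        [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED="$dev" "$rootdir/scripts/setup.sh" config)
    (( found == 1 ))   # the test fails here unless the mount was seen

Every other controller (0000:af:00.0, 0000:b0:00.0, the 00:04.x and 80:04.x devices) falls through the comparison, which is exactly the long run of read/[[ pairs in the trace.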
00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:22.394 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.653 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:22.653 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:22.653 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.653 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:22.653 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:22.653 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:22.653 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.653 20:26:15 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.653 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:22.653 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:22.653 /dev/nvme0n1p1: 2 bytes 
were erased at offset 0x00000438 (ext4): 53 ef 00:06:22.653 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:22.653 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:22.913 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:22.913 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:06:22.913 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:22.913 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:22.914 20:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active 
devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.209 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- 
# [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:26.210 20:26:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:29.502 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:29.503 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:29.503 00:06:29.503 real 0m11.966s 00:06:29.503 user 0m3.291s 00:06:29.503 sys 0m6.449s 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.503 20:26:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:29.503 ************************************ 00:06:29.503 END TEST nvme_mount 00:06:29.503 ************************************ 00:06:29.503 20:26:22 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:29.503 20:26:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.503 20:26:22 setup.sh.devices -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.503 20:26:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:29.763 ************************************ 00:06:29.763 START TEST dm_mount 00:06:29.763 ************************************ 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1129 -- # dm_mount 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:29.763 
20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:06:29.763 20:26:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:06:30.701 Creating new GPT entries in memory.
00:06:30.701 GPT data structures destroyed! You may now partition the disk using fdisk or
00:06:30.701 other utilities.
00:06:30.701 20:26:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:06:30.701 20:26:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:06:30.701 20:26:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:06:30.701 20:26:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:06:30.701 20:26:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:06:31.643 Creating new GPT entries in memory.
00:06:31.643 The operation has completed successfully.
00:06:31.643 20:26:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:06:31.643 20:26:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:06:31.643 20:26:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:06:31.643 20:26:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:06:31.643 20:26:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:06:32.585 The operation has completed successfully.
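The two sgdisk calls just traced come straight out of the part loop in setup/common.sh: size starts at 1073741824 bytes (1 GiB) and is divided by 512, so each partition spans 2097152 sectors; the first starts at sector 2048, and each later one starts at part_end + 1. A worked restatement of that arithmetic with the same numbers as the log (echo stands in for the flock-wrapped sgdisk call):

    # Each 1 GiB partition is 1073741824 / 512 = 2097152 sectors wide.
    size=$((1073741824 / 512))
    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        echo "sgdisk /dev/nvme0n1 --new=$part:$part_start:$part_end"
    done
    # -> --new=1:2048:2099199 and --new=2:2099200:4196351, matching the trace.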
00:06:32.585 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:32.585 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:32.585 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1818678 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:32.844 20:26:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:36.134 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:36.134 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:36.134 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- 
setup/devices.sh@55 -- # [[ -n '' ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:36.134 20:26:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:39.427 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:39.688 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:39.688 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:39.688 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:39.688 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:39.688 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:06:39.688 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:39.688 20:26:32 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:39.688 20:26:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:39.688 20:26:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:39.688 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:39.688 20:26:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:39.688 20:26:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:39.688 00:06:39.688 real 0m10.119s 00:06:39.688 user 0m2.427s 00:06:39.688 sys 0m4.755s 00:06:39.688 20:26:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.688 20:26:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:39.688 ************************************ 00:06:39.688 END TEST dm_mount 00:06:39.688 ************************************ 00:06:39.688 20:26:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:39.688 20:26:33 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:39.688 20:26:33 
setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:06:39.688 20:26:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:39.688 20:26:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:06:39.947 20:26:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:06:39.947 20:26:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:06:40.206 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:06:40.206 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:06:40.206 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:06:40.206 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:06:40.206 20:26:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:06:40.206 20:26:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:06:40.206 20:26:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:06:40.206 20:26:33 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:06:40.206 20:26:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:06:40.206 20:26:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:06:40.206 20:26:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:06:40.206
00:06:40.206 real 0m26.169s
00:06:40.206 user 0m7.039s
00:06:40.206 sys 0m13.660s
00:06:40.207 20:26:33 setup.sh.devices -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.207 20:26:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:06:40.207 ************************************
00:06:40.207 END TEST devices
00:06:40.207 ************************************
00:06:40.207
00:06:40.207 real 1m28.231s
00:06:40.207 user 0m27.590s
00:06:40.207 sys 0m51.444s
00:06:40.207 20:26:33 setup.sh -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.207 20:26:33 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:06:40.207 ************************************
00:06:40.207 END TEST setup.sh
00:06:40.207 ************************************
00:06:40.207 20:26:33 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:06:43.507 Hugepages
00:06:43.507 node hugesize free / total
00:06:43.507 node0 1048576kB 0 / 0
00:06:43.507 node0 2048kB 1024 / 1024
00:06:43.766 node1 1048576kB 0 / 0
00:06:43.766 node1 2048kB 1024 / 1024
00:06:43.766
00:06:43.766 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:43.766 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:06:43.766 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:06:43.766 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:06:43.766 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:06:43.766 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:06:43.766 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:06:43.766 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:06:43.766 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:06:43.766 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:06:43.766 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:06:43.766 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:06:43.766 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:06:43.766 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:06:43.766 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:06:43.766 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:43.766 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:43.766 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:44.024 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1 00:06:44.024 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1 00:06:44.024 20:26:37 -- spdk/autotest.sh@117 -- # uname -s 00:06:44.024 20:26:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:44.024 20:26:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:44.024 20:26:37 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:48.220 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:06:48.220 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:48.220 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:06:48.220 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:49.605 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:06:49.605 20:26:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:50.544 20:26:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:50.544 20:26:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:50.544 20:26:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:50.544 20:26:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:50.544 20:26:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:50.544 20:26:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:50.544 20:26:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:50.544 20:26:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:50.544 20:26:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:50.803 20:26:44 -- common/autotest_common.sh@1500 -- # (( 3 == 0 )) 00:06:50.803 20:26:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:06:50.803 20:26:44 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:54.095 Waiting for block devices as requested 00:06:54.095 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:06:54.353 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:06:54.353 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:54.353 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:54.612 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:54.612 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:54.612 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:54.870 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:54.870 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:54.870 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:06:54.870 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:06:55.129 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:55.129 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:55.388 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:55.388 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:55.388 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:55.651 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:55.651 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:55.651 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:55.908 20:26:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:55.908 20:26:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:06:55.908 20:26:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:06:55.908 20:26:49 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:06:55.908 20:26:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:06:55.909 20:26:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:06:55.909 20:26:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:55.909 20:26:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:55.909 20:26:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:55.909 20:26:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:55.909 20:26:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:55.909 20:26:49 -- common/autotest_common.sh@1543 -- # continue 00:06:55.909 20:26:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:55.909 20:26:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:af:00.0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1487 -- # grep 0000:af:00.0/nvme/nvme 00:06:55.909 20:26:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:06:55.909 20:26:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 ]] 00:06:55.909 20:26:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:06:55.909 20:26:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:55.909 20:26:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:55.909 20:26:49 -- common/autotest_common.sh@1526 -- # [[ -z 
/dev/nvme1 ]] 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x7' 00:06:55.909 20:26:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1534 -- # [[ 0 -ne 0 ]] 00:06:55.909 20:26:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:55.909 20:26:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:b0:00.0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1487 -- # grep 0000:b0:00.0/nvme/nvme 00:06:55.909 20:26:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 ]] 00:06:55.909 20:26:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:55.909 20:26:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x7' 00:06:55.909 20:26:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=0 00:06:55.909 20:26:49 -- common/autotest_common.sh@1534 -- # [[ 0 -ne 0 ]] 00:06:55.909 20:26:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:55.909 20:26:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.909 20:26:49 -- common/autotest_common.sh@10 -- # set +x 00:06:55.909 20:26:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:55.909 20:26:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.909 20:26:49 -- common/autotest_common.sh@10 -- # set +x 00:06:55.909 20:26:49 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:07:00.100 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:07:00.100 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:07:00.100 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:07:00.100 0000:80:04.1 
(8086 2021): ioatdma -> vfio-pci 00:07:00.100 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:00.100 20:26:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:00.100 20:26:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.100 20:26:53 -- common/autotest_common.sh@10 -- # set +x 00:07:00.100 20:26:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:00.100 20:26:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:00.100 20:26:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:00.100 20:26:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:00.100 20:26:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:00.100 20:26:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:00.100 20:26:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:00.100 20:26:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:00.100 20:26:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:00.100 20:26:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:00.100 20:26:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:00.100 20:26:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:00.100 20:26:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:00.100 20:26:53 -- common/autotest_common.sh@1500 -- # (( 3 == 0 )) 00:07:00.100 20:26:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:07:00.100 20:26:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:00.100 20:26:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:07:00.100 20:26:53 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:07:00.100 20:26:53 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:07:00.100 20:26:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:00.100 20:26:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:af:00.0/device 00:07:00.100 20:26:53 -- common/autotest_common.sh@1566 -- # device=0x2701 00:07:00.100 20:26:53 -- common/autotest_common.sh@1567 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:07:00.100 20:26:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:00.100 20:26:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:b0:00.0/device 00:07:00.100 20:26:53 -- common/autotest_common.sh@1566 -- # device=0x2701 00:07:00.100 20:26:53 -- common/autotest_common.sh@1567 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:07:00.100 20:26:53 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:00.100 20:26:53 -- common/autotest_common.sh@1572 -- # return 0 00:07:00.101 20:26:53 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:00.101 20:26:53 -- common/autotest_common.sh@1580 -- # return 0 00:07:00.101 20:26:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:00.101 20:26:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:00.101 20:26:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:00.101 20:26:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:00.101 20:26:53 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:00.101 20:26:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.101 20:26:53 -- common/autotest_common.sh@10 -- # set +x 00:07:00.101 20:26:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:00.101 20:26:53 -- spdk/autotest.sh@155 -- 
# run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:07:00.101 20:26:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.101 20:26:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.101 20:26:53 -- common/autotest_common.sh@10 -- # set +x 00:07:00.101 ************************************ 00:07:00.101 START TEST env 00:07:00.101 ************************************ 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:07:00.101 * Looking for test storage... 00:07:00.101 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.101 20:26:53 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.101 20:26:53 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.101 20:26:53 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.101 20:26:53 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.101 20:26:53 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.101 20:26:53 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.101 20:26:53 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.101 20:26:53 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.101 20:26:53 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.101 20:26:53 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.101 20:26:53 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.101 20:26:53 env -- scripts/common.sh@344 -- # case "$op" in 00:07:00.101 20:26:53 env -- scripts/common.sh@345 -- # : 1 00:07:00.101 20:26:53 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.101 20:26:53 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.101 20:26:53 env -- scripts/common.sh@365 -- # decimal 1 00:07:00.101 20:26:53 env -- scripts/common.sh@353 -- # local d=1 00:07:00.101 20:26:53 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.101 20:26:53 env -- scripts/common.sh@355 -- # echo 1 00:07:00.101 20:26:53 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.101 20:26:53 env -- scripts/common.sh@366 -- # decimal 2 00:07:00.101 20:26:53 env -- scripts/common.sh@353 -- # local d=2 00:07:00.101 20:26:53 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.101 20:26:53 env -- scripts/common.sh@355 -- # echo 2 00:07:00.101 20:26:53 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.101 20:26:53 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.101 20:26:53 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.101 20:26:53 env -- scripts/common.sh@368 -- # return 0 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.101 --rc genhtml_branch_coverage=1 00:07:00.101 --rc genhtml_function_coverage=1 00:07:00.101 --rc genhtml_legend=1 00:07:00.101 --rc geninfo_all_blocks=1 00:07:00.101 --rc geninfo_unexecuted_blocks=1 00:07:00.101 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:00.101 ' 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.101 --rc genhtml_branch_coverage=1 00:07:00.101 --rc genhtml_function_coverage=1 00:07:00.101 --rc genhtml_legend=1 00:07:00.101 --rc geninfo_all_blocks=1 00:07:00.101 --rc geninfo_unexecuted_blocks=1 00:07:00.101 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:00.101 ' 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.101 --rc genhtml_branch_coverage=1 00:07:00.101 --rc genhtml_function_coverage=1 00:07:00.101 --rc genhtml_legend=1 00:07:00.101 --rc geninfo_all_blocks=1 00:07:00.101 --rc geninfo_unexecuted_blocks=1 00:07:00.101 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:00.101 ' 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.101 --rc genhtml_branch_coverage=1 00:07:00.101 --rc genhtml_function_coverage=1 00:07:00.101 --rc genhtml_legend=1 00:07:00.101 --rc geninfo_all_blocks=1 00:07:00.101 --rc geninfo_unexecuted_blocks=1 00:07:00.101 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:00.101 ' 00:07:00.101 20:26:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.101 20:26:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.101 20:26:53 env -- common/autotest_common.sh@10 -- # set +x 00:07:00.101 ************************************ 00:07:00.101 START TEST env_memory 00:07:00.101 ************************************ 00:07:00.101 20:26:53 env.env_memory -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:07:00.101 00:07:00.101 00:07:00.101 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.101 http://cunit.sourceforge.net/ 00:07:00.101 00:07:00.101 00:07:00.101 Suite: memory 00:07:00.361 Test: alloc and free memory map ...[2024-12-05 20:26:53.556506] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:00.361 passed 00:07:00.361 Test: mem map translation ...[2024-12-05 20:26:53.570201] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:00.361 [2024-12-05 20:26:53.570222] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:00.361 [2024-12-05 20:26:53.570269] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:00.361 [2024-12-05 20:26:53.570278] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:00.361 passed 00:07:00.361 Test: mem map registration ...[2024-12-05 20:26:53.590674] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:00.361 [2024-12-05 20:26:53.590692] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:00.361 passed 00:07:00.361 Test: mem map adjacent registrations ...passed 00:07:00.361 00:07:00.361 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.361 suites 1 1 n/a 0 0 00:07:00.361 tests 4 4 4 0 0 00:07:00.361 asserts 152 152 152 0 n/a 00:07:00.361 00:07:00.361 Elapsed time = 0.087 seconds 00:07:00.361 00:07:00.361 real 0m0.101s 00:07:00.361 user 0m0.090s 00:07:00.361 sys 0m0.010s 00:07:00.361 20:26:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.361 20:26:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:00.361 ************************************ 00:07:00.361 END TEST env_memory 00:07:00.361 ************************************ 00:07:00.362 20:26:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:00.362 20:26:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.362 20:26:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.362 20:26:53 env -- common/autotest_common.sh@10 -- # set +x 00:07:00.362 ************************************ 00:07:00.362 START TEST env_vtophys 00:07:00.362 ************************************ 00:07:00.362 20:26:53 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:00.362 EAL: lib.eal log level changed from notice to debug 00:07:00.362 EAL: Detected lcore 0 as core 0 on socket 0 00:07:00.362 EAL: Detected lcore 1 as core 1 on socket 0 00:07:00.362 EAL: Detected lcore 2 as core 2 on socket 0 00:07:00.362 EAL: Detected lcore 3 as 
core 3 on socket 0 00:07:00.362 EAL: Detected lcore 4 as core 4 on socket 0 00:07:00.362 EAL: Detected lcore 5 as core 8 on socket 0 00:07:00.362 EAL: Detected lcore 6 as core 9 on socket 0 00:07:00.362 EAL: Detected lcore 7 as core 10 on socket 0 00:07:00.362 EAL: Detected lcore 8 as core 11 on socket 0 00:07:00.362 EAL: Detected lcore 9 as core 16 on socket 0 00:07:00.362 EAL: Detected lcore 10 as core 17 on socket 0 00:07:00.362 EAL: Detected lcore 11 as core 18 on socket 0 00:07:00.362 EAL: Detected lcore 12 as core 19 on socket 0 00:07:00.362 EAL: Detected lcore 13 as core 20 on socket 0 00:07:00.362 EAL: Detected lcore 14 as core 24 on socket 0 00:07:00.362 EAL: Detected lcore 15 as core 25 on socket 0 00:07:00.362 EAL: Detected lcore 16 as core 26 on socket 0 00:07:00.362 EAL: Detected lcore 17 as core 27 on socket 0 00:07:00.362 EAL: Detected lcore 18 as core 0 on socket 1 00:07:00.362 EAL: Detected lcore 19 as core 1 on socket 1 00:07:00.362 EAL: Detected lcore 20 as core 2 on socket 1 00:07:00.362 EAL: Detected lcore 21 as core 3 on socket 1 00:07:00.362 EAL: Detected lcore 22 as core 4 on socket 1 00:07:00.362 EAL: Detected lcore 23 as core 8 on socket 1 00:07:00.362 EAL: Detected lcore 24 as core 9 on socket 1 00:07:00.362 EAL: Detected lcore 25 as core 10 on socket 1 00:07:00.362 EAL: Detected lcore 26 as core 11 on socket 1 00:07:00.362 EAL: Detected lcore 27 as core 16 on socket 1 00:07:00.362 EAL: Detected lcore 28 as core 17 on socket 1 00:07:00.362 EAL: Detected lcore 29 as core 18 on socket 1 00:07:00.362 EAL: Detected lcore 30 as core 19 on socket 1 00:07:00.362 EAL: Detected lcore 31 as core 20 on socket 1 00:07:00.362 EAL: Detected lcore 32 as core 24 on socket 1 00:07:00.362 EAL: Detected lcore 33 as core 25 on socket 1 00:07:00.362 EAL: Detected lcore 34 as core 26 on socket 1 00:07:00.362 EAL: Detected lcore 35 as core 27 on socket 1 00:07:00.362 EAL: Detected lcore 36 as core 0 on socket 0 00:07:00.362 EAL: Detected lcore 37 as core 1 on socket 0 00:07:00.362 EAL: Detected lcore 38 as core 2 on socket 0 00:07:00.362 EAL: Detected lcore 39 as core 3 on socket 0 00:07:00.362 EAL: Detected lcore 40 as core 4 on socket 0 00:07:00.362 EAL: Detected lcore 41 as core 8 on socket 0 00:07:00.362 EAL: Detected lcore 42 as core 9 on socket 0 00:07:00.362 EAL: Detected lcore 43 as core 10 on socket 0 00:07:00.362 EAL: Detected lcore 44 as core 11 on socket 0 00:07:00.362 EAL: Detected lcore 45 as core 16 on socket 0 00:07:00.362 EAL: Detected lcore 46 as core 17 on socket 0 00:07:00.362 EAL: Detected lcore 47 as core 18 on socket 0 00:07:00.362 EAL: Detected lcore 48 as core 19 on socket 0 00:07:00.362 EAL: Detected lcore 49 as core 20 on socket 0 00:07:00.362 EAL: Detected lcore 50 as core 24 on socket 0 00:07:00.362 EAL: Detected lcore 51 as core 25 on socket 0 00:07:00.362 EAL: Detected lcore 52 as core 26 on socket 0 00:07:00.362 EAL: Detected lcore 53 as core 27 on socket 0 00:07:00.362 EAL: Detected lcore 54 as core 0 on socket 1 00:07:00.362 EAL: Detected lcore 55 as core 1 on socket 1 00:07:00.362 EAL: Detected lcore 56 as core 2 on socket 1 00:07:00.362 EAL: Detected lcore 57 as core 3 on socket 1 00:07:00.362 EAL: Detected lcore 58 as core 4 on socket 1 00:07:00.362 EAL: Detected lcore 59 as core 8 on socket 1 00:07:00.362 EAL: Detected lcore 60 as core 9 on socket 1 00:07:00.362 EAL: Detected lcore 61 as core 10 on socket 1 00:07:00.362 EAL: Detected lcore 62 as core 11 on socket 1 00:07:00.362 EAL: Detected lcore 63 as core 16 on socket 1 00:07:00.362 EAL: 
Detected lcore 64 as core 17 on socket 1 00:07:00.362 EAL: Detected lcore 65 as core 18 on socket 1 00:07:00.362 EAL: Detected lcore 66 as core 19 on socket 1 00:07:00.362 EAL: Detected lcore 67 as core 20 on socket 1 00:07:00.362 EAL: Detected lcore 68 as core 24 on socket 1 00:07:00.362 EAL: Detected lcore 69 as core 25 on socket 1 00:07:00.362 EAL: Detected lcore 70 as core 26 on socket 1 00:07:00.362 EAL: Detected lcore 71 as core 27 on socket 1 00:07:00.362 EAL: Maximum logical cores by configuration: 128 00:07:00.362 EAL: Detected CPU lcores: 72 00:07:00.362 EAL: Detected NUMA nodes: 2 00:07:00.362 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:00.362 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:00.362 EAL: Checking presence of .so 'librte_eal.so' 00:07:00.362 EAL: Detected static linkage of DPDK 00:07:00.362 EAL: No shared files mode enabled, IPC will be disabled 00:07:00.362 EAL: Bus pci wants IOVA as 'DC' 00:07:00.362 EAL: Buses did not request a specific IOVA mode. 00:07:00.362 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:00.362 EAL: Selected IOVA mode 'VA' 00:07:00.362 EAL: Probing VFIO support... 00:07:00.362 EAL: IOMMU type 1 (Type 1) is supported 00:07:00.362 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:00.362 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:00.362 EAL: VFIO support initialized 00:07:00.362 EAL: Ask a virtual area of 0x2e000 bytes 00:07:00.362 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:00.362 EAL: Setting up physically contiguous memory... 00:07:00.362 EAL: Setting maximum number of open files to 524288 00:07:00.362 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:00.362 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:00.362 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:00.362 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.362 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:00.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.362 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.362 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:00.362 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:00.362 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.362 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:00.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.362 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.362 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:00.362 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:00.362 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.362 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:00.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.362 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.362 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:00.362 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:00.362 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.362 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:00.362 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.362 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.362 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:00.362 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:00.362 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:07:00.362 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.362 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:00.362 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:00.362 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.362 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:00.362 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:00.362 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.362 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:00.362 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:00.362 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.362 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:00.362 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:00.362 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.362 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:00.362 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:00.362 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.362 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:00.362 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:00.362 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.362 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:00.362 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:00.362 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.362 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:07:00.362 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:00.362 EAL: Hugepages will be freed exactly as allocated. 00:07:00.362 EAL: No shared files mode enabled, IPC is disabled 00:07:00.362 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: TSC frequency is ~2300000 KHz 00:07:00.363 EAL: Main lcore 0 is ready (tid=7fb1a4de1a00;cpuset=[0]) 00:07:00.363 EAL: Trying to obtain current memory policy. 00:07:00.363 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.363 EAL: Restoring previous memory policy: 0 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was expanded by 2MB 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Mem event callback 'spdk:(nil)' registered 00:07:00.363 00:07:00.363 00:07:00.363 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.363 http://cunit.sourceforge.net/ 00:07:00.363 00:07:00.363 00:07:00.363 Suite: components_suite 00:07:00.363 Test: vtophys_malloc_test ...passed 00:07:00.363 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:00.363 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.363 EAL: Restoring previous memory policy: 4 00:07:00.363 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was expanded by 4MB 00:07:00.363 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was shrunk by 4MB 00:07:00.363 EAL: Trying to obtain current memory policy. 
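The expand/shrink pairs above are the vtophys_spdk_malloc_test at work: each round allocates a roughly doubling buffer (4, 6, 10, 18, 34 and 66 MB so far; the remaining rounds, up to 1026 MB, continue below), EAL backs it with 2 MiB hugepages on socket 0, and freeing the buffer shrinks the heap again. The 'spdk:(nil)' mem event callback that fires on every transition is how SPDK keeps its virtual-to-physical translation maps in step with the heap. Hugepage consumption during such a run can be watched from sysfs; a small sketch using the standard kernel paths, shown for the 2048 kB page size this job uses:

    # Print per-NUMA-node 2 MiB hugepage usage, the pool behind the
    # "Heap on socket N was expanded by ..." messages in this log.
    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB
        [ -d "$hp" ] || continue
        printf '%s: %s free of %s\n' "$(basename "$node")" \
            "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done

Run before and after a test, this shows node 0's free count dipping while the heap is expanded and recovering once it is shrunk.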
00:07:00.363 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.363 EAL: Restoring previous memory policy: 4 00:07:00.363 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was expanded by 6MB 00:07:00.363 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was shrunk by 6MB 00:07:00.363 EAL: Trying to obtain current memory policy. 00:07:00.363 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.363 EAL: Restoring previous memory policy: 4 00:07:00.363 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was expanded by 10MB 00:07:00.363 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was shrunk by 10MB 00:07:00.363 EAL: Trying to obtain current memory policy. 00:07:00.363 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.363 EAL: Restoring previous memory policy: 4 00:07:00.363 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was expanded by 18MB 00:07:00.363 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was shrunk by 18MB 00:07:00.363 EAL: Trying to obtain current memory policy. 00:07:00.363 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.363 EAL: Restoring previous memory policy: 4 00:07:00.363 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.363 EAL: request: mp_malloc_sync 00:07:00.363 EAL: No shared files mode enabled, IPC is disabled 00:07:00.363 EAL: Heap on socket 0 was expanded by 34MB 00:07:00.623 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.623 EAL: request: mp_malloc_sync 00:07:00.623 EAL: No shared files mode enabled, IPC is disabled 00:07:00.623 EAL: Heap on socket 0 was shrunk by 34MB 00:07:00.623 EAL: Trying to obtain current memory policy. 00:07:00.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.623 EAL: Restoring previous memory policy: 4 00:07:00.623 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.623 EAL: request: mp_malloc_sync 00:07:00.623 EAL: No shared files mode enabled, IPC is disabled 00:07:00.623 EAL: Heap on socket 0 was expanded by 66MB 00:07:00.623 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.623 EAL: request: mp_malloc_sync 00:07:00.623 EAL: No shared files mode enabled, IPC is disabled 00:07:00.623 EAL: Heap on socket 0 was shrunk by 66MB 00:07:00.623 EAL: Trying to obtain current memory policy. 
00:07:00.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.623 EAL: Restoring previous memory policy: 4 00:07:00.623 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.623 EAL: request: mp_malloc_sync 00:07:00.623 EAL: No shared files mode enabled, IPC is disabled 00:07:00.623 EAL: Heap on socket 0 was expanded by 130MB 00:07:00.623 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.623 EAL: request: mp_malloc_sync 00:07:00.623 EAL: No shared files mode enabled, IPC is disabled 00:07:00.623 EAL: Heap on socket 0 was shrunk by 130MB 00:07:00.623 EAL: Trying to obtain current memory policy. 00:07:00.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.623 EAL: Restoring previous memory policy: 4 00:07:00.623 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.623 EAL: request: mp_malloc_sync 00:07:00.623 EAL: No shared files mode enabled, IPC is disabled 00:07:00.623 EAL: Heap on socket 0 was expanded by 258MB 00:07:00.623 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.623 EAL: request: mp_malloc_sync 00:07:00.623 EAL: No shared files mode enabled, IPC is disabled 00:07:00.623 EAL: Heap on socket 0 was shrunk by 258MB 00:07:00.623 EAL: Trying to obtain current memory policy. 00:07:00.623 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.882 EAL: Restoring previous memory policy: 4 00:07:00.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.882 EAL: request: mp_malloc_sync 00:07:00.882 EAL: No shared files mode enabled, IPC is disabled 00:07:00.882 EAL: Heap on socket 0 was expanded by 514MB 00:07:00.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.141 EAL: request: mp_malloc_sync 00:07:01.141 EAL: No shared files mode enabled, IPC is disabled 00:07:01.141 EAL: Heap on socket 0 was shrunk by 514MB 00:07:01.141 EAL: Trying to obtain current memory policy. 
00:07:01.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:01.400 EAL: Restoring previous memory policy: 4 00:07:01.400 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.400 EAL: request: mp_malloc_sync 00:07:01.400 EAL: No shared files mode enabled, IPC is disabled 00:07:01.400 EAL: Heap on socket 0 was expanded by 1026MB 00:07:01.400 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.660 EAL: request: mp_malloc_sync 00:07:01.660 EAL: No shared files mode enabled, IPC is disabled 00:07:01.660 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:01.660 passed 00:07:01.660 00:07:01.660 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.660 suites 1 1 n/a 0 0 00:07:01.660 tests 2 2 2 0 0 00:07:01.660 asserts 497 497 497 0 n/a 00:07:01.660 00:07:01.660 Elapsed time = 1.126 seconds 00:07:01.660 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.660 EAL: request: mp_malloc_sync 00:07:01.660 EAL: No shared files mode enabled, IPC is disabled 00:07:01.660 EAL: Heap on socket 0 was shrunk by 2MB 00:07:01.660 EAL: No shared files mode enabled, IPC is disabled 00:07:01.660 EAL: No shared files mode enabled, IPC is disabled 00:07:01.660 EAL: No shared files mode enabled, IPC is disabled 00:07:01.660 00:07:01.660 real 0m1.252s 00:07:01.660 user 0m0.709s 00:07:01.660 sys 0m0.510s 00:07:01.660 20:26:54 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.660 20:26:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:01.660 ************************************ 00:07:01.660 END TEST env_vtophys 00:07:01.660 ************************************ 00:07:01.660 20:26:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:07:01.660 20:26:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.660 20:26:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.660 20:26:54 env -- common/autotest_common.sh@10 -- # set +x 00:07:01.660 ************************************ 00:07:01.660 START TEST env_pci 00:07:01.660 ************************************ 00:07:01.660 20:26:55 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:07:01.660 00:07:01.660 00:07:01.660 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.660 http://cunit.sourceforge.net/ 00:07:01.660 00:07:01.660 00:07:01.660 Suite: pci 00:07:01.660 Test: pci_hook ...[2024-12-05 20:26:55.018722] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1828022 has claimed it 00:07:01.660 EAL: Cannot find device (10000:00:01.0) 00:07:01.660 EAL: Failed to attach device on primary process 00:07:01.660 passed 00:07:01.660 00:07:01.660 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.660 suites 1 1 n/a 0 0 00:07:01.660 tests 1 1 1 0 0 00:07:01.660 asserts 25 25 25 0 n/a 00:07:01.660 00:07:01.660 Elapsed time = 0.031 seconds 00:07:01.660 00:07:01.660 real 0m0.045s 00:07:01.660 user 0m0.013s 00:07:01.660 sys 0m0.032s 00:07:01.660 20:26:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.660 20:26:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:01.660 ************************************ 00:07:01.660 END TEST env_pci 00:07:01.660 ************************************ 00:07:01.660 20:26:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:01.660 
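At this point env/env.sh assembles the EAL argument string (-c 0x1, plus --base-virtaddr since this is Linux) before launching the next test binary. As a sketch, the equivalent manual invocation, assuming the workspace layout shown in these paths:

    # Run the DPDK post-init test pinned to core 0 with a fixed base
    # virtual address, matching the argv assembled by env/env.sh above.
    sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000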
20:26:55 env -- env/env.sh@15 -- # uname 00:07:01.660 20:26:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:01.660 20:26:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:01.660 20:26:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:01.660 20:26:55 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:01.660 20:26:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.660 20:26:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:01.925 ************************************ 00:07:01.925 START TEST env_dpdk_post_init 00:07:01.925 ************************************ 00:07:01.925 20:26:55 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:01.925 EAL: Detected CPU lcores: 72 00:07:01.925 EAL: Detected NUMA nodes: 2 00:07:01.925 EAL: Detected static linkage of DPDK 00:07:01.925 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:01.925 EAL: Selected IOVA mode 'VA' 00:07:01.925 EAL: VFIO support initialized 00:07:01.925 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:01.925 EAL: Using IOMMU type 1 (Type 1) 00:07:02.184 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:5e:00.0 (socket 0) 00:07:02.443 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:af:00.0 (socket 1) 00:07:02.701 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:b0:00.0 (socket 1) 00:07:02.701 EAL: Releasing PCI mapped resource for 0000:af:00.0 00:07:02.701 EAL: Calling pci_unmap_resource for 0000:af:00.0 at 0x202001004000 00:07:02.701 EAL: Releasing PCI mapped resource for 0000:b0:00.0 00:07:02.701 EAL: Calling pci_unmap_resource for 0000:b0:00.0 at 0x202001008000 00:07:02.959 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:07:02.959 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:07:02.959 Starting DPDK initialization... 00:07:02.959 Starting SPDK post initialization... 00:07:02.959 SPDK NVMe probe 00:07:02.959 Attaching to 0000:5e:00.0 00:07:02.959 Attaching to 0000:af:00.0 00:07:02.959 Attaching to 0000:b0:00.0 00:07:02.959 Attached to 0000:af:00.0 00:07:02.959 Attached to 0000:b0:00.0 00:07:02.959 Attached to 0000:5e:00.0 00:07:02.959 Cleaning up... 
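The spdk_nvme probes above only succeed because the three controllers (0000:5e:00.0, 0000:af:00.0, 0000:b0:00.0) were already bound to a userspace driver. A hedged sketch of how to confirm that binding on such a node before running the test (setup.sh is SPDK's standard helper script; exact output varies by system):

    # List NVMe devices and the driver each one is currently bound to.
    sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
    # Or inspect a single controller through the kernel's view.
    lspci -k -s 5e:00.0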
00:07:02.959 00:07:02.959 real 0m1.132s 00:07:02.959 user 0m0.150s 00:07:02.959 sys 0m0.302s 00:07:02.959 20:26:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.959 20:26:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:02.959 ************************************ 00:07:02.959 END TEST env_dpdk_post_init 00:07:02.959 ************************************ 00:07:02.959 20:26:56 env -- env/env.sh@26 -- # uname 00:07:02.959 20:26:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:02.959 20:26:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:02.959 20:26:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.959 20:26:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.959 20:26:56 env -- common/autotest_common.sh@10 -- # set +x 00:07:02.959 ************************************ 00:07:02.959 START TEST env_mem_callbacks 00:07:02.959 ************************************ 00:07:02.959 20:26:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:02.959 EAL: Detected CPU lcores: 72 00:07:02.959 EAL: Detected NUMA nodes: 2 00:07:02.959 EAL: Detected static linkage of DPDK 00:07:02.959 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:02.959 EAL: Selected IOVA mode 'VA' 00:07:02.959 EAL: VFIO support initialized 00:07:03.217 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:03.217 00:07:03.217 00:07:03.217 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.217 http://cunit.sourceforge.net/ 00:07:03.217 00:07:03.217 00:07:03.217 Suite: memory 00:07:03.217 Test: test ... 
00:07:03.217 register 0x200000200000 2097152 00:07:03.217 malloc 3145728 00:07:03.217 register 0x200000400000 4194304 00:07:03.217 buf 0x200000500000 len 3145728 PASSED 00:07:03.217 malloc 64 00:07:03.217 buf 0x2000004fff40 len 64 PASSED 00:07:03.217 malloc 4194304 00:07:03.217 register 0x200000800000 6291456 00:07:03.217 buf 0x200000a00000 len 4194304 PASSED 00:07:03.217 free 0x200000500000 3145728 00:07:03.217 free 0x2000004fff40 64 00:07:03.217 unregister 0x200000400000 4194304 PASSED 00:07:03.217 free 0x200000a00000 4194304 00:07:03.217 unregister 0x200000800000 6291456 PASSED 00:07:03.217 malloc 8388608 00:07:03.217 register 0x200000400000 10485760 00:07:03.217 buf 0x200000600000 len 8388608 PASSED 00:07:03.217 free 0x200000600000 8388608 00:07:03.217 unregister 0x200000400000 10485760 PASSED 00:07:03.217 passed 00:07:03.217 00:07:03.217 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.217 suites 1 1 n/a 0 0 00:07:03.217 tests 1 1 1 0 0 00:07:03.217 asserts 15 15 15 0 n/a 00:07:03.217 00:07:03.217 Elapsed time = 0.005 seconds 00:07:03.217 00:07:03.217 real 0m0.068s 00:07:03.217 user 0m0.017s 00:07:03.217 sys 0m0.050s 00:07:03.217 20:26:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.217 20:26:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:03.217 ************************************ 00:07:03.217 END TEST env_mem_callbacks 00:07:03.217 ************************************ 00:07:03.217 00:07:03.217 real 0m3.155s 00:07:03.217 user 0m1.224s 00:07:03.217 sys 0m1.255s 00:07:03.217 20:26:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.217 20:26:56 env -- common/autotest_common.sh@10 -- # set +x 00:07:03.218 ************************************ 00:07:03.218 END TEST env 00:07:03.218 ************************************ 00:07:03.218 20:26:56 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:07:03.218 20:26:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.218 20:26:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.218 20:26:56 -- common/autotest_common.sh@10 -- # set +x 00:07:03.218 ************************************ 00:07:03.218 START TEST rpc 00:07:03.218 ************************************ 00:07:03.218 20:26:56 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:07:03.218 * Looking for test storage... 
00:07:03.218 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:03.218 20:26:56 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:03.218 20:26:56 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:03.218 20:26:56 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:03.477 20:26:56 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:03.477 20:26:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.477 20:26:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.477 20:26:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.477 20:26:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.477 20:26:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.477 20:26:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.477 20:26:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.477 20:26:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.477 20:26:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.477 20:26:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.477 20:26:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.477 20:26:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:03.477 20:26:56 rpc -- scripts/common.sh@345 -- # : 1 00:07:03.477 20:26:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.477 20:26:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.477 20:26:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:03.477 20:26:56 rpc -- scripts/common.sh@353 -- # local d=1 00:07:03.478 20:26:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.478 20:26:56 rpc -- scripts/common.sh@355 -- # echo 1 00:07:03.478 20:26:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.478 20:26:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:03.478 20:26:56 rpc -- scripts/common.sh@353 -- # local d=2 00:07:03.478 20:26:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.478 20:26:56 rpc -- scripts/common.sh@355 -- # echo 2 00:07:03.478 20:26:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.478 20:26:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.478 20:26:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.478 20:26:56 rpc -- scripts/common.sh@368 -- # return 0 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:03.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.478 --rc genhtml_branch_coverage=1 00:07:03.478 --rc genhtml_function_coverage=1 00:07:03.478 --rc genhtml_legend=1 00:07:03.478 --rc geninfo_all_blocks=1 00:07:03.478 --rc geninfo_unexecuted_blocks=1 00:07:03.478 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.478 ' 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:03.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.478 --rc genhtml_branch_coverage=1 00:07:03.478 --rc genhtml_function_coverage=1 00:07:03.478 --rc genhtml_legend=1 00:07:03.478 --rc geninfo_all_blocks=1 00:07:03.478 --rc geninfo_unexecuted_blocks=1 00:07:03.478 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.478 ' 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:07:03.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.478 --rc genhtml_branch_coverage=1 00:07:03.478 --rc genhtml_function_coverage=1 00:07:03.478 --rc genhtml_legend=1 00:07:03.478 --rc geninfo_all_blocks=1 00:07:03.478 --rc geninfo_unexecuted_blocks=1 00:07:03.478 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.478 ' 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:03.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.478 --rc genhtml_branch_coverage=1 00:07:03.478 --rc genhtml_function_coverage=1 00:07:03.478 --rc genhtml_legend=1 00:07:03.478 --rc geninfo_all_blocks=1 00:07:03.478 --rc geninfo_unexecuted_blocks=1 00:07:03.478 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:03.478 ' 00:07:03.478 20:26:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1828363 00:07:03.478 20:26:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.478 20:26:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:03.478 20:26:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1828363 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 1828363 ']' 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.478 20:26:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.478 [2024-12-05 20:26:56.757122] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:03.478 [2024-12-05 20:26:56.757192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828363 ] 00:07:03.478 [2024-12-05 20:26:56.834382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.478 [2024-12-05 20:26:56.882500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:03.478 [2024-12-05 20:26:56.882549] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1828363' to capture a snapshot of events at runtime. 00:07:03.478 [2024-12-05 20:26:56.882559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.478 [2024-12-05 20:26:56.882568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.478 [2024-12-05 20:26:56.882575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1828363 for offline analysis/debug. 
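The app_setup_trace notices above spell out the tracing workflow for this run: the bdev tpoint group is active, and a snapshot can be pulled while the target is up. Following the log's own hint (the pid 1828363 is specific to this run):

    # Capture bdev tracepoints from the live spdk_tgt...
    sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_trace -s spdk_tgt -p 1828363
    # ...or keep the shared-memory trace file for offline analysis.
    cp /dev/shm/spdk_tgt_trace.pid1828363 /tmp/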
00:07:03.478 [2024-12-05 20:26:56.883068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.737 20:26:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.737 20:26:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.737 20:26:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:03.737 20:26:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:03.737 20:26:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:03.737 20:26:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:03.737 20:26:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.737 20:26:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.737 20:26:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.737 ************************************ 00:07:03.737 START TEST rpc_integrity 00:07:03.737 ************************************ 00:07:03.737 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:03.737 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:03.737 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.738 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.738 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.738 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:03.738 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:03.996 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:03.996 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:03.996 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.996 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.996 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.996 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:03.996 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:03.996 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.996 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.996 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.996 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:03.996 { 00:07:03.996 "name": "Malloc0", 00:07:03.996 "aliases": [ 00:07:03.996 "3101da74-f961-437e-8d3f-7a6dc9f7f08c" 00:07:03.996 ], 00:07:03.996 "product_name": "Malloc disk", 00:07:03.996 "block_size": 512, 00:07:03.996 "num_blocks": 16384, 00:07:03.996 "uuid": "3101da74-f961-437e-8d3f-7a6dc9f7f08c", 00:07:03.996 "assigned_rate_limits": { 00:07:03.996 "rw_ios_per_sec": 0, 00:07:03.996 "rw_mbytes_per_sec": 0, 00:07:03.996 "r_mbytes_per_sec": 0, 00:07:03.996 "w_mbytes_per_sec": 
0 00:07:03.996 }, 00:07:03.996 "claimed": false, 00:07:03.996 "zoned": false, 00:07:03.996 "supported_io_types": { 00:07:03.996 "read": true, 00:07:03.996 "write": true, 00:07:03.996 "unmap": true, 00:07:03.996 "flush": true, 00:07:03.996 "reset": true, 00:07:03.996 "nvme_admin": false, 00:07:03.996 "nvme_io": false, 00:07:03.996 "nvme_io_md": false, 00:07:03.996 "write_zeroes": true, 00:07:03.996 "zcopy": true, 00:07:03.996 "get_zone_info": false, 00:07:03.996 "zone_management": false, 00:07:03.996 "zone_append": false, 00:07:03.996 "compare": false, 00:07:03.996 "compare_and_write": false, 00:07:03.996 "abort": true, 00:07:03.996 "seek_hole": false, 00:07:03.996 "seek_data": false, 00:07:03.996 "copy": true, 00:07:03.996 "nvme_iov_md": false 00:07:03.996 }, 00:07:03.996 "memory_domains": [ 00:07:03.996 { 00:07:03.996 "dma_device_id": "system", 00:07:03.996 "dma_device_type": 1 00:07:03.996 }, 00:07:03.996 { 00:07:03.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.996 "dma_device_type": 2 00:07:03.996 } 00:07:03.996 ], 00:07:03.996 "driver_specific": {} 00:07:03.996 } 00:07:03.996 ]' 00:07:03.996 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:03.996 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:03.996 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:03.996 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.996 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.996 [2024-12-05 20:26:57.262874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:03.996 [2024-12-05 20:26:57.262910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.996 [2024-12-05 20:26:57.262929] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5f81cc0 00:07:03.996 [2024-12-05 20:26:57.262938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.996 [2024-12-05 20:26:57.263913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.996 [2024-12-05 20:26:57.263938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:03.996 Passthru0 00:07:03.996 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.997 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.997 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:03.997 { 00:07:03.997 "name": "Malloc0", 00:07:03.997 "aliases": [ 00:07:03.997 "3101da74-f961-437e-8d3f-7a6dc9f7f08c" 00:07:03.997 ], 00:07:03.997 "product_name": "Malloc disk", 00:07:03.997 "block_size": 512, 00:07:03.997 "num_blocks": 16384, 00:07:03.997 "uuid": "3101da74-f961-437e-8d3f-7a6dc9f7f08c", 00:07:03.997 "assigned_rate_limits": { 00:07:03.997 "rw_ios_per_sec": 0, 00:07:03.997 "rw_mbytes_per_sec": 0, 00:07:03.997 "r_mbytes_per_sec": 0, 00:07:03.997 "w_mbytes_per_sec": 0 00:07:03.997 }, 00:07:03.997 "claimed": true, 00:07:03.997 "claim_type": "exclusive_write", 00:07:03.997 "zoned": false, 00:07:03.997 "supported_io_types": { 00:07:03.997 "read": true, 00:07:03.997 "write": true, 00:07:03.997 "unmap": true, 
00:07:03.997 "flush": true, 00:07:03.997 "reset": true, 00:07:03.997 "nvme_admin": false, 00:07:03.997 "nvme_io": false, 00:07:03.997 "nvme_io_md": false, 00:07:03.997 "write_zeroes": true, 00:07:03.997 "zcopy": true, 00:07:03.997 "get_zone_info": false, 00:07:03.997 "zone_management": false, 00:07:03.997 "zone_append": false, 00:07:03.997 "compare": false, 00:07:03.997 "compare_and_write": false, 00:07:03.997 "abort": true, 00:07:03.997 "seek_hole": false, 00:07:03.997 "seek_data": false, 00:07:03.997 "copy": true, 00:07:03.997 "nvme_iov_md": false 00:07:03.997 }, 00:07:03.997 "memory_domains": [ 00:07:03.997 { 00:07:03.997 "dma_device_id": "system", 00:07:03.997 "dma_device_type": 1 00:07:03.997 }, 00:07:03.997 { 00:07:03.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.997 "dma_device_type": 2 00:07:03.997 } 00:07:03.997 ], 00:07:03.997 "driver_specific": {} 00:07:03.997 }, 00:07:03.997 { 00:07:03.997 "name": "Passthru0", 00:07:03.997 "aliases": [ 00:07:03.997 "c66947a3-885d-508b-a9f2-79a0da720d25" 00:07:03.997 ], 00:07:03.997 "product_name": "passthru", 00:07:03.997 "block_size": 512, 00:07:03.997 "num_blocks": 16384, 00:07:03.997 "uuid": "c66947a3-885d-508b-a9f2-79a0da720d25", 00:07:03.997 "assigned_rate_limits": { 00:07:03.997 "rw_ios_per_sec": 0, 00:07:03.997 "rw_mbytes_per_sec": 0, 00:07:03.997 "r_mbytes_per_sec": 0, 00:07:03.997 "w_mbytes_per_sec": 0 00:07:03.997 }, 00:07:03.997 "claimed": false, 00:07:03.997 "zoned": false, 00:07:03.997 "supported_io_types": { 00:07:03.997 "read": true, 00:07:03.997 "write": true, 00:07:03.997 "unmap": true, 00:07:03.997 "flush": true, 00:07:03.997 "reset": true, 00:07:03.997 "nvme_admin": false, 00:07:03.997 "nvme_io": false, 00:07:03.997 "nvme_io_md": false, 00:07:03.997 "write_zeroes": true, 00:07:03.997 "zcopy": true, 00:07:03.997 "get_zone_info": false, 00:07:03.997 "zone_management": false, 00:07:03.997 "zone_append": false, 00:07:03.997 "compare": false, 00:07:03.997 "compare_and_write": false, 00:07:03.997 "abort": true, 00:07:03.997 "seek_hole": false, 00:07:03.997 "seek_data": false, 00:07:03.997 "copy": true, 00:07:03.997 "nvme_iov_md": false 00:07:03.997 }, 00:07:03.997 "memory_domains": [ 00:07:03.997 { 00:07:03.997 "dma_device_id": "system", 00:07:03.997 "dma_device_type": 1 00:07:03.997 }, 00:07:03.997 { 00:07:03.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.997 "dma_device_type": 2 00:07:03.997 } 00:07:03.997 ], 00:07:03.997 "driver_specific": { 00:07:03.997 "passthru": { 00:07:03.997 "name": "Passthru0", 00:07:03.997 "base_bdev_name": "Malloc0" 00:07:03.997 } 00:07:03.997 } 00:07:03.997 } 00:07:03.997 ]' 00:07:03.997 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:03.997 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:03.997 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.997 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.997 20:26:57 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.997 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:03.997 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:03.997 20:26:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:03.997 00:07:03.997 real 0m0.266s 00:07:03.997 user 0m0.167s 00:07:03.997 sys 0m0.048s 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.997 20:26:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.997 ************************************ 00:07:03.997 END TEST rpc_integrity 00:07:03.997 ************************************ 00:07:04.256 20:26:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:04.256 20:26:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.256 20:26:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.256 20:26:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.256 ************************************ 00:07:04.256 START TEST rpc_plugins 00:07:04.256 ************************************ 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:04.256 { 00:07:04.256 "name": "Malloc1", 00:07:04.256 "aliases": [ 00:07:04.256 "6738926a-c359-4aa4-b8ec-46a9265e5eea" 00:07:04.256 ], 00:07:04.256 "product_name": "Malloc disk", 00:07:04.256 "block_size": 4096, 00:07:04.256 "num_blocks": 256, 00:07:04.256 "uuid": "6738926a-c359-4aa4-b8ec-46a9265e5eea", 00:07:04.256 "assigned_rate_limits": { 00:07:04.256 "rw_ios_per_sec": 0, 00:07:04.256 "rw_mbytes_per_sec": 0, 00:07:04.256 "r_mbytes_per_sec": 0, 00:07:04.256 "w_mbytes_per_sec": 0 00:07:04.256 }, 00:07:04.256 "claimed": false, 00:07:04.256 "zoned": false, 00:07:04.256 "supported_io_types": { 00:07:04.256 "read": true, 00:07:04.256 "write": true, 00:07:04.256 "unmap": true, 00:07:04.256 "flush": true, 00:07:04.256 "reset": true, 00:07:04.256 "nvme_admin": false, 00:07:04.256 "nvme_io": false, 00:07:04.256 "nvme_io_md": false, 00:07:04.256 "write_zeroes": true, 00:07:04.256 "zcopy": true, 00:07:04.256 "get_zone_info": false, 00:07:04.256 "zone_management": false, 00:07:04.256 "zone_append": false, 00:07:04.256 "compare": false, 00:07:04.256 "compare_and_write": false, 00:07:04.256 "abort": true, 00:07:04.256 "seek_hole": false, 00:07:04.256 "seek_data": false, 00:07:04.256 "copy": true, 00:07:04.256 
"nvme_iov_md": false 00:07:04.256 }, 00:07:04.256 "memory_domains": [ 00:07:04.256 { 00:07:04.256 "dma_device_id": "system", 00:07:04.256 "dma_device_type": 1 00:07:04.256 }, 00:07:04.256 { 00:07:04.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.256 "dma_device_type": 2 00:07:04.256 } 00:07:04.256 ], 00:07:04.256 "driver_specific": {} 00:07:04.256 } 00:07:04.256 ]' 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:04.256 20:26:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:04.256 00:07:04.256 real 0m0.142s 00:07:04.256 user 0m0.092s 00:07:04.256 sys 0m0.020s 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.256 20:26:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.256 ************************************ 00:07:04.256 END TEST rpc_plugins 00:07:04.256 ************************************ 00:07:04.256 20:26:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:04.256 20:26:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.256 20:26:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.256 20:26:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.515 ************************************ 00:07:04.515 START TEST rpc_trace_cmd_test 00:07:04.515 ************************************ 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:04.515 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1828363", 00:07:04.515 "tpoint_group_mask": "0x8", 00:07:04.515 "iscsi_conn": { 00:07:04.515 "mask": "0x2", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "scsi": { 00:07:04.515 "mask": "0x4", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "bdev": { 00:07:04.515 "mask": "0x8", 00:07:04.515 "tpoint_mask": "0xffffffffffffffff" 00:07:04.515 }, 00:07:04.515 "nvmf_rdma": { 00:07:04.515 "mask": "0x10", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "nvmf_tcp": { 00:07:04.515 "mask": "0x20", 
00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "ftl": { 00:07:04.515 "mask": "0x40", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "blobfs": { 00:07:04.515 "mask": "0x80", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "dsa": { 00:07:04.515 "mask": "0x200", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "thread": { 00:07:04.515 "mask": "0x400", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "nvme_pcie": { 00:07:04.515 "mask": "0x800", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "iaa": { 00:07:04.515 "mask": "0x1000", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "nvme_tcp": { 00:07:04.515 "mask": "0x2000", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "bdev_nvme": { 00:07:04.515 "mask": "0x4000", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "sock": { 00:07:04.515 "mask": "0x8000", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "blob": { 00:07:04.515 "mask": "0x10000", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "bdev_raid": { 00:07:04.515 "mask": "0x20000", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 }, 00:07:04.515 "scheduler": { 00:07:04.515 "mask": "0x40000", 00:07:04.515 "tpoint_mask": "0x0" 00:07:04.515 } 00:07:04.515 }' 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:04.515 00:07:04.515 real 0m0.209s 00:07:04.515 user 0m0.172s 00:07:04.515 sys 0m0.030s 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.515 20:26:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.515 ************************************ 00:07:04.515 END TEST rpc_trace_cmd_test 00:07:04.515 ************************************ 00:07:04.775 20:26:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:04.775 20:26:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:04.775 20:26:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:04.775 20:26:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.775 20:26:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.775 20:26:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.775 ************************************ 00:07:04.775 START TEST rpc_daemon_integrity 00:07:04.775 ************************************ 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.775 20:26:58 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.775 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:04.775 { 00:07:04.775 "name": "Malloc2", 00:07:04.775 "aliases": [ 00:07:04.775 "1bd141fd-abaa-4860-899d-19b73badaabf" 00:07:04.775 ], 00:07:04.775 "product_name": "Malloc disk", 00:07:04.775 "block_size": 512, 00:07:04.775 "num_blocks": 16384, 00:07:04.775 "uuid": "1bd141fd-abaa-4860-899d-19b73badaabf", 00:07:04.775 "assigned_rate_limits": { 00:07:04.775 "rw_ios_per_sec": 0, 00:07:04.775 "rw_mbytes_per_sec": 0, 00:07:04.776 "r_mbytes_per_sec": 0, 00:07:04.776 "w_mbytes_per_sec": 0 00:07:04.776 }, 00:07:04.776 "claimed": false, 00:07:04.776 "zoned": false, 00:07:04.776 "supported_io_types": { 00:07:04.776 "read": true, 00:07:04.776 "write": true, 00:07:04.776 "unmap": true, 00:07:04.776 "flush": true, 00:07:04.776 "reset": true, 00:07:04.776 "nvme_admin": false, 00:07:04.776 "nvme_io": false, 00:07:04.776 "nvme_io_md": false, 00:07:04.776 "write_zeroes": true, 00:07:04.776 "zcopy": true, 00:07:04.776 "get_zone_info": false, 00:07:04.776 "zone_management": false, 00:07:04.776 "zone_append": false, 00:07:04.776 "compare": false, 00:07:04.776 "compare_and_write": false, 00:07:04.776 "abort": true, 00:07:04.776 "seek_hole": false, 00:07:04.776 "seek_data": false, 00:07:04.776 "copy": true, 00:07:04.776 "nvme_iov_md": false 00:07:04.776 }, 00:07:04.776 "memory_domains": [ 00:07:04.776 { 00:07:04.776 "dma_device_id": "system", 00:07:04.776 "dma_device_type": 1 00:07:04.776 }, 00:07:04.776 { 00:07:04.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.776 "dma_device_type": 2 00:07:04.776 } 00:07:04.776 ], 00:07:04.776 "driver_specific": {} 00:07:04.776 } 00:07:04.776 ]' 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.776 [2024-12-05 20:26:58.153121] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:04.776 
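rpc_cmd in these tests issues the same JSON-RPC methods that scripts/rpc.py exposes against the default /var/tmp/spdk.sock, so the daemon-integrity steps running here reduce to roughly the following sketch (8 MiB at a 512-byte block size is what yields the num_blocks of 16384 reported in the bdev dumps):

    # Create the malloc bdev, layer a passthru bdev on it, then dump both.
    scripts/rpc.py bdev_malloc_create 8 512          # prints the new bdev name (MallocN)
    scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    scripts/rpc.py bdev_get_bdevs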
[2024-12-05 20:26:58.153159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.776 [2024-12-05 20:26:58.153178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x60a3d30 00:07:04.776 [2024-12-05 20:26:58.153188] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.776 [2024-12-05 20:26:58.154112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.776 [2024-12-05 20:26:58.154139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:04.776 Passthru0 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.776 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:04.776 { 00:07:04.776 "name": "Malloc2", 00:07:04.776 "aliases": [ 00:07:04.776 "1bd141fd-abaa-4860-899d-19b73badaabf" 00:07:04.776 ], 00:07:04.776 "product_name": "Malloc disk", 00:07:04.776 "block_size": 512, 00:07:04.776 "num_blocks": 16384, 00:07:04.776 "uuid": "1bd141fd-abaa-4860-899d-19b73badaabf", 00:07:04.776 "assigned_rate_limits": { 00:07:04.776 "rw_ios_per_sec": 0, 00:07:04.776 "rw_mbytes_per_sec": 0, 00:07:04.776 "r_mbytes_per_sec": 0, 00:07:04.776 "w_mbytes_per_sec": 0 00:07:04.776 }, 00:07:04.776 "claimed": true, 00:07:04.776 "claim_type": "exclusive_write", 00:07:04.776 "zoned": false, 00:07:04.776 "supported_io_types": { 00:07:04.776 "read": true, 00:07:04.776 "write": true, 00:07:04.776 "unmap": true, 00:07:04.776 "flush": true, 00:07:04.776 "reset": true, 00:07:04.776 "nvme_admin": false, 00:07:04.776 "nvme_io": false, 00:07:04.776 "nvme_io_md": false, 00:07:04.776 "write_zeroes": true, 00:07:04.776 "zcopy": true, 00:07:04.776 "get_zone_info": false, 00:07:04.776 "zone_management": false, 00:07:04.776 "zone_append": false, 00:07:04.776 "compare": false, 00:07:04.776 "compare_and_write": false, 00:07:04.776 "abort": true, 00:07:04.776 "seek_hole": false, 00:07:04.776 "seek_data": false, 00:07:04.776 "copy": true, 00:07:04.776 "nvme_iov_md": false 00:07:04.776 }, 00:07:04.776 "memory_domains": [ 00:07:04.776 { 00:07:04.776 "dma_device_id": "system", 00:07:04.776 "dma_device_type": 1 00:07:04.776 }, 00:07:04.776 { 00:07:04.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.776 "dma_device_type": 2 00:07:04.776 } 00:07:04.776 ], 00:07:04.776 "driver_specific": {} 00:07:04.776 }, 00:07:04.776 { 00:07:04.776 "name": "Passthru0", 00:07:04.776 "aliases": [ 00:07:04.776 "ee796a33-afbd-5049-ad1f-f6087db31aac" 00:07:04.776 ], 00:07:04.776 "product_name": "passthru", 00:07:04.776 "block_size": 512, 00:07:04.776 "num_blocks": 16384, 00:07:04.776 "uuid": "ee796a33-afbd-5049-ad1f-f6087db31aac", 00:07:04.776 "assigned_rate_limits": { 00:07:04.776 "rw_ios_per_sec": 0, 00:07:04.776 "rw_mbytes_per_sec": 0, 00:07:04.776 "r_mbytes_per_sec": 0, 00:07:04.776 "w_mbytes_per_sec": 0 00:07:04.776 }, 00:07:04.776 "claimed": false, 00:07:04.776 "zoned": false, 00:07:04.776 "supported_io_types": { 00:07:04.776 "read": true, 00:07:04.776 "write": true, 00:07:04.776 "unmap": true, 00:07:04.776 "flush": true, 00:07:04.776 "reset": true, 
00:07:04.776 "nvme_admin": false, 00:07:04.776 "nvme_io": false, 00:07:04.776 "nvme_io_md": false, 00:07:04.776 "write_zeroes": true, 00:07:04.776 "zcopy": true, 00:07:04.776 "get_zone_info": false, 00:07:04.776 "zone_management": false, 00:07:04.776 "zone_append": false, 00:07:04.776 "compare": false, 00:07:04.776 "compare_and_write": false, 00:07:04.776 "abort": true, 00:07:04.776 "seek_hole": false, 00:07:04.776 "seek_data": false, 00:07:04.776 "copy": true, 00:07:04.777 "nvme_iov_md": false 00:07:04.777 }, 00:07:04.777 "memory_domains": [ 00:07:04.777 { 00:07:04.777 "dma_device_id": "system", 00:07:04.777 "dma_device_type": 1 00:07:04.777 }, 00:07:04.777 { 00:07:04.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.777 "dma_device_type": 2 00:07:04.777 } 00:07:04.777 ], 00:07:04.777 "driver_specific": { 00:07:04.777 "passthru": { 00:07:04.777 "name": "Passthru0", 00:07:04.777 "base_bdev_name": "Malloc2" 00:07:04.777 } 00:07:04.777 } 00:07:04.777 } 00:07:04.777 ]' 00:07:04.777 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:05.036 00:07:05.036 real 0m0.287s 00:07:05.036 user 0m0.172s 00:07:05.036 sys 0m0.058s 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.036 20:26:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:05.037 ************************************ 00:07:05.037 END TEST rpc_daemon_integrity 00:07:05.037 ************************************ 00:07:05.037 20:26:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:05.037 20:26:58 rpc -- rpc/rpc.sh@84 -- # killprocess 1828363 00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 1828363 ']' 00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@958 -- # kill -0 1828363 00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@959 -- # uname 00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1828363 
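The killprocess teardown visible here deliberately checks the process comm before signalling, so a sudo wrapper is never the direct kill target. A simplified sketch of that guard (the real helper in autotest_common.sh also branches to handle the sudo case; this only shows the shape):

    kill_by_pid() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        # the real helper takes a different path when comm is 'sudo'
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null           # reap our child
    }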
00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1828363' 00:07:05.037 killing process with pid 1828363 00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@973 -- # kill 1828363 00:07:05.037 20:26:58 rpc -- common/autotest_common.sh@978 -- # wait 1828363 00:07:05.606 00:07:05.606 real 0m2.209s 00:07:05.606 user 0m2.742s 00:07:05.606 sys 0m0.847s 00:07:05.606 20:26:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.606 20:26:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.606 ************************************ 00:07:05.606 END TEST rpc 00:07:05.606 ************************************ 00:07:05.606 20:26:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:05.606 20:26:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.606 20:26:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.606 20:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:05.606 ************************************ 00:07:05.606 START TEST skip_rpc 00:07:05.606 ************************************ 00:07:05.606 20:26:58 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:05.606 * Looking for test storage... 00:07:05.606 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:07:05.606 20:26:58 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.606 20:26:58 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.606 20:26:58 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.606 20:26:59 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.606 20:26:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:05.606 20:26:59 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.606 20:26:59 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.606 --rc genhtml_branch_coverage=1 00:07:05.606 --rc genhtml_function_coverage=1 00:07:05.606 --rc genhtml_legend=1 00:07:05.606 --rc geninfo_all_blocks=1 00:07:05.606 --rc geninfo_unexecuted_blocks=1 00:07:05.606 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.606 ' 00:07:05.606 20:26:59 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.606 --rc genhtml_branch_coverage=1 00:07:05.606 --rc genhtml_function_coverage=1 00:07:05.606 --rc genhtml_legend=1 00:07:05.606 --rc geninfo_all_blocks=1 00:07:05.606 --rc geninfo_unexecuted_blocks=1 00:07:05.606 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.606 ' 00:07:05.606 20:26:59 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.606 --rc genhtml_branch_coverage=1 00:07:05.606 --rc genhtml_function_coverage=1 00:07:05.606 --rc genhtml_legend=1 00:07:05.606 --rc geninfo_all_blocks=1 00:07:05.606 --rc geninfo_unexecuted_blocks=1 00:07:05.606 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.606 ' 00:07:05.607 20:26:59 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.607 --rc genhtml_branch_coverage=1 00:07:05.607 --rc genhtml_function_coverage=1 00:07:05.607 --rc genhtml_legend=1 00:07:05.607 --rc geninfo_all_blocks=1 00:07:05.607 --rc geninfo_unexecuted_blocks=1 00:07:05.607 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.607 ' 00:07:05.607 20:26:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:05.607 20:26:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:05.607 20:26:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:05.607 20:26:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.607 20:26:59 
skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.607 20:26:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.887 ************************************ 00:07:05.888 START TEST skip_rpc 00:07:05.888 ************************************ 00:07:05.888 20:26:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:05.888 20:26:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1828866 00:07:05.888 20:26:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.888 20:26:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:05.888 20:26:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:05.888 [2024-12-05 20:26:59.095030] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:05.888 [2024-12-05 20:26:59.095108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828866 ] 00:07:05.888 [2024-12-05 20:26:59.168918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.888 [2024-12-05 20:26:59.214255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1828866 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1828866 ']' 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1828866 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1828866 
00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1828866' 00:07:11.161 killing process with pid 1828866 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1828866 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1828866 00:07:11.161 00:07:11.161 real 0m5.410s 00:07:11.161 user 0m5.146s 00:07:11.161 sys 0m0.303s 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.161 20:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.161 ************************************ 00:07:11.161 END TEST skip_rpc 00:07:11.161 ************************************ 00:07:11.161 20:27:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:11.161 20:27:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.161 20:27:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.161 20:27:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.161 ************************************ 00:07:11.161 START TEST skip_rpc_with_json 00:07:11.161 ************************************ 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1829903 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1829903 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1829903 ']' 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.161 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.161 [2024-12-05 20:27:04.574241] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
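The skip_rpc run traced above follows a fixed shape: launch spdk_tgt with --no-rpc-server, sleep, assert that an RPC call fails (the NOT helper inverts the exit status), then tear the target down via killprocess. A minimal sketch of that shape, reconstructed from the xtrace output; the helper bodies below are assumptions rather than the verbatim autotest_common.sh source, and the binary/script paths are illustrative:

    #!/usr/bin/env bash
    # Sketch of the skip_rpc flow traced above (assumed paths).
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    trap 'kill -9 $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    sleep 5    # the test sleeps before probing, as in the trace

    # With no RPC server, any RPC must fail; invert the status as NOT does.
    if ./scripts/rpc.py spdk_get_version; then
        echo 'FAIL: RPC unexpectedly succeeded' >&2
        exit 1
    fi

    # killprocess, as traced: confirm the pid is alive and not sudo-owned,
    # then kill it and reap the exit status.
    if kill -0 "$spdk_pid" 2>/dev/null &&
       [ "$(ps --no-headers -o comm= "$spdk_pid")" != sudo ]; then
        kill "$spdk_pid"
        wait "$spdk_pid" || true
    fi
    trap - SIGINT SIGTERM EXIT

The es bookkeeping in the trace ((( !es == 0 ))) is just this inversion made explicit, so the expected RPC failure still trips the script's error handling if it ever succeeds.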
00:07:11.161 [2024-12-05 20:27:04.574308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1829903 ] 00:07:11.421 [2024-12-05 20:27:04.647326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.421 [2024-12-05 20:27:04.695752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.680 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.680 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:11.680 20:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:11.680 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.680 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.680 [2024-12-05 20:27:04.940128] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:11.680 request: 00:07:11.680 { 00:07:11.680 "trtype": "tcp", 00:07:11.680 "method": "nvmf_get_transports", 00:07:11.680 "req_id": 1 00:07:11.680 } 00:07:11.680 Got JSON-RPC error response 00:07:11.680 response: 00:07:11.680 { 00:07:11.680 "code": -19, 00:07:11.680 "message": "No such device" 00:07:11.680 } 00:07:11.681 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:11.681 20:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:11.681 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.681 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.681 [2024-12-05 20:27:04.948222] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.681 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.681 20:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:11.681 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.681 20:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.681 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.681 20:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:11.939 { 00:07:11.939 "subsystems": [ 00:07:11.939 { 00:07:11.939 "subsystem": "scheduler", 00:07:11.939 "config": [ 00:07:11.939 { 00:07:11.939 "method": "framework_set_scheduler", 00:07:11.939 "params": { 00:07:11.940 "name": "static" 00:07:11.940 } 00:07:11.940 } 00:07:11.940 ] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "vmd", 00:07:11.940 "config": [] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "sock", 00:07:11.940 "config": [ 00:07:11.940 { 00:07:11.940 "method": "sock_set_default_impl", 00:07:11.940 "params": { 00:07:11.940 "impl_name": "posix" 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "sock_impl_set_options", 00:07:11.940 "params": { 00:07:11.940 "impl_name": "ssl", 00:07:11.940 "recv_buf_size": 4096, 00:07:11.940 "send_buf_size": 4096, 00:07:11.940 "enable_recv_pipe": true, 00:07:11.940 "enable_quickack": false, 00:07:11.940 
"enable_placement_id": 0, 00:07:11.940 "enable_zerocopy_send_server": true, 00:07:11.940 "enable_zerocopy_send_client": false, 00:07:11.940 "zerocopy_threshold": 0, 00:07:11.940 "tls_version": 0, 00:07:11.940 "enable_ktls": false 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "sock_impl_set_options", 00:07:11.940 "params": { 00:07:11.940 "impl_name": "posix", 00:07:11.940 "recv_buf_size": 2097152, 00:07:11.940 "send_buf_size": 2097152, 00:07:11.940 "enable_recv_pipe": true, 00:07:11.940 "enable_quickack": false, 00:07:11.940 "enable_placement_id": 0, 00:07:11.940 "enable_zerocopy_send_server": true, 00:07:11.940 "enable_zerocopy_send_client": false, 00:07:11.940 "zerocopy_threshold": 0, 00:07:11.940 "tls_version": 0, 00:07:11.940 "enable_ktls": false 00:07:11.940 } 00:07:11.940 } 00:07:11.940 ] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "iobuf", 00:07:11.940 "config": [ 00:07:11.940 { 00:07:11.940 "method": "iobuf_set_options", 00:07:11.940 "params": { 00:07:11.940 "small_pool_count": 8192, 00:07:11.940 "large_pool_count": 1024, 00:07:11.940 "small_bufsize": 8192, 00:07:11.940 "large_bufsize": 135168, 00:07:11.940 "enable_numa": false 00:07:11.940 } 00:07:11.940 } 00:07:11.940 ] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "keyring", 00:07:11.940 "config": [] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "vfio_user_target", 00:07:11.940 "config": null 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "fsdev", 00:07:11.940 "config": [ 00:07:11.940 { 00:07:11.940 "method": "fsdev_set_opts", 00:07:11.940 "params": { 00:07:11.940 "fsdev_io_pool_size": 65535, 00:07:11.940 "fsdev_io_cache_size": 256 00:07:11.940 } 00:07:11.940 } 00:07:11.940 ] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "accel", 00:07:11.940 "config": [ 00:07:11.940 { 00:07:11.940 "method": "accel_set_options", 00:07:11.940 "params": { 00:07:11.940 "small_cache_size": 128, 00:07:11.940 "large_cache_size": 16, 00:07:11.940 "task_count": 2048, 00:07:11.940 "sequence_count": 2048, 00:07:11.940 "buf_count": 2048 00:07:11.940 } 00:07:11.940 } 00:07:11.940 ] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "bdev", 00:07:11.940 "config": [ 00:07:11.940 { 00:07:11.940 "method": "bdev_set_options", 00:07:11.940 "params": { 00:07:11.940 "bdev_io_pool_size": 65535, 00:07:11.940 "bdev_io_cache_size": 256, 00:07:11.940 "bdev_auto_examine": true, 00:07:11.940 "iobuf_small_cache_size": 128, 00:07:11.940 "iobuf_large_cache_size": 16 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "bdev_raid_set_options", 00:07:11.940 "params": { 00:07:11.940 "process_window_size_kb": 1024, 00:07:11.940 "process_max_bandwidth_mb_sec": 0 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "bdev_nvme_set_options", 00:07:11.940 "params": { 00:07:11.940 "action_on_timeout": "none", 00:07:11.940 "timeout_us": 0, 00:07:11.940 "timeout_admin_us": 0, 00:07:11.940 "keep_alive_timeout_ms": 10000, 00:07:11.940 "arbitration_burst": 0, 00:07:11.940 "low_priority_weight": 0, 00:07:11.940 "medium_priority_weight": 0, 00:07:11.940 "high_priority_weight": 0, 00:07:11.940 "nvme_adminq_poll_period_us": 10000, 00:07:11.940 "nvme_ioq_poll_period_us": 0, 00:07:11.940 "io_queue_requests": 0, 00:07:11.940 "delay_cmd_submit": true, 00:07:11.940 "transport_retry_count": 4, 00:07:11.940 "bdev_retry_count": 3, 00:07:11.940 "transport_ack_timeout": 0, 00:07:11.940 "ctrlr_loss_timeout_sec": 0, 00:07:11.940 "reconnect_delay_sec": 0, 00:07:11.940 
"fast_io_fail_timeout_sec": 0, 00:07:11.940 "disable_auto_failback": false, 00:07:11.940 "generate_uuids": false, 00:07:11.940 "transport_tos": 0, 00:07:11.940 "nvme_error_stat": false, 00:07:11.940 "rdma_srq_size": 0, 00:07:11.940 "io_path_stat": false, 00:07:11.940 "allow_accel_sequence": false, 00:07:11.940 "rdma_max_cq_size": 0, 00:07:11.940 "rdma_cm_event_timeout_ms": 0, 00:07:11.940 "dhchap_digests": [ 00:07:11.940 "sha256", 00:07:11.940 "sha384", 00:07:11.940 "sha512" 00:07:11.940 ], 00:07:11.940 "dhchap_dhgroups": [ 00:07:11.940 "null", 00:07:11.940 "ffdhe2048", 00:07:11.940 "ffdhe3072", 00:07:11.940 "ffdhe4096", 00:07:11.940 "ffdhe6144", 00:07:11.940 "ffdhe8192" 00:07:11.940 ] 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "bdev_nvme_set_hotplug", 00:07:11.940 "params": { 00:07:11.940 "period_us": 100000, 00:07:11.940 "enable": false 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "bdev_iscsi_set_options", 00:07:11.940 "params": { 00:07:11.940 "timeout_sec": 30 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "bdev_wait_for_examine" 00:07:11.940 } 00:07:11.940 ] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "nvmf", 00:07:11.940 "config": [ 00:07:11.940 { 00:07:11.940 "method": "nvmf_set_config", 00:07:11.940 "params": { 00:07:11.940 "discovery_filter": "match_any", 00:07:11.940 "admin_cmd_passthru": { 00:07:11.940 "identify_ctrlr": false 00:07:11.940 }, 00:07:11.940 "dhchap_digests": [ 00:07:11.940 "sha256", 00:07:11.940 "sha384", 00:07:11.940 "sha512" 00:07:11.940 ], 00:07:11.940 "dhchap_dhgroups": [ 00:07:11.940 "null", 00:07:11.940 "ffdhe2048", 00:07:11.940 "ffdhe3072", 00:07:11.940 "ffdhe4096", 00:07:11.940 "ffdhe6144", 00:07:11.940 "ffdhe8192" 00:07:11.940 ] 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "nvmf_set_max_subsystems", 00:07:11.940 "params": { 00:07:11.940 "max_subsystems": 1024 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "nvmf_set_crdt", 00:07:11.940 "params": { 00:07:11.940 "crdt1": 0, 00:07:11.940 "crdt2": 0, 00:07:11.940 "crdt3": 0 00:07:11.940 } 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "method": "nvmf_create_transport", 00:07:11.940 "params": { 00:07:11.940 "trtype": "TCP", 00:07:11.940 "max_queue_depth": 128, 00:07:11.940 "max_io_qpairs_per_ctrlr": 127, 00:07:11.940 "in_capsule_data_size": 4096, 00:07:11.940 "max_io_size": 131072, 00:07:11.940 "io_unit_size": 131072, 00:07:11.940 "max_aq_depth": 128, 00:07:11.940 "num_shared_buffers": 511, 00:07:11.940 "buf_cache_size": 4294967295, 00:07:11.940 "dif_insert_or_strip": false, 00:07:11.940 "zcopy": false, 00:07:11.940 "c2h_success": true, 00:07:11.940 "sock_priority": 0, 00:07:11.940 "abort_timeout_sec": 1, 00:07:11.940 "ack_timeout": 0, 00:07:11.940 "data_wr_pool_size": 0 00:07:11.940 } 00:07:11.940 } 00:07:11.940 ] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "nbd", 00:07:11.940 "config": [] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "ublk", 00:07:11.940 "config": [] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "vhost_blk", 00:07:11.940 "config": [] 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "scsi", 00:07:11.940 "config": null 00:07:11.940 }, 00:07:11.940 { 00:07:11.940 "subsystem": "iscsi", 00:07:11.940 "config": [ 00:07:11.940 { 00:07:11.941 "method": "iscsi_set_options", 00:07:11.941 "params": { 00:07:11.941 "node_base": "iqn.2016-06.io.spdk", 00:07:11.941 "max_sessions": 128, 00:07:11.941 "max_connections_per_session": 2, 
00:07:11.941 "max_queue_depth": 64, 00:07:11.941 "default_time2wait": 2, 00:07:11.941 "default_time2retain": 20, 00:07:11.941 "first_burst_length": 8192, 00:07:11.941 "immediate_data": true, 00:07:11.941 "allow_duplicated_isid": false, 00:07:11.941 "error_recovery_level": 0, 00:07:11.941 "nop_timeout": 60, 00:07:11.941 "nop_in_interval": 30, 00:07:11.941 "disable_chap": false, 00:07:11.941 "require_chap": false, 00:07:11.941 "mutual_chap": false, 00:07:11.941 "chap_group": 0, 00:07:11.941 "max_large_datain_per_connection": 64, 00:07:11.941 "max_r2t_per_connection": 4, 00:07:11.941 "pdu_pool_size": 36864, 00:07:11.941 "immediate_data_pool_size": 16384, 00:07:11.941 "data_out_pool_size": 2048 00:07:11.941 } 00:07:11.941 } 00:07:11.941 ] 00:07:11.941 }, 00:07:11.941 { 00:07:11.941 "subsystem": "vhost_scsi", 00:07:11.941 "config": [] 00:07:11.941 } 00:07:11.941 ] 00:07:11.941 } 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1829903 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1829903 ']' 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1829903 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1829903 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1829903' 00:07:11.941 killing process with pid 1829903 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1829903 00:07:11.941 20:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1829903 00:07:12.200 20:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1830243 00:07:12.200 20:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:12.200 20:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1830243 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1830243 ']' 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1830243 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1830243 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.471 20:27:10 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1830243' 00:07:17.471 killing process with pid 1830243 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1830243 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1830243 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:07:17.471 00:07:17.471 real 0m6.339s 00:07:17.471 user 0m5.982s 00:07:17.471 sys 0m0.642s 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.471 20:27:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:17.471 ************************************ 00:07:17.471 END TEST skip_rpc_with_json 00:07:17.471 ************************************ 00:07:17.731 20:27:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:17.731 20:27:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.731 20:27:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.731 20:27:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.731 ************************************ 00:07:17.731 START TEST skip_rpc_with_delay 00:07:17.731 ************************************ 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:17.731 20:27:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:17.731 [2024-12-05 20:27:11.010577] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:17.731 20:27:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:17.731 20:27:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.731 20:27:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.731 20:27:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.731 00:07:17.731 real 0m0.048s 00:07:17.731 user 0m0.020s 00:07:17.731 sys 0m0.028s 00:07:17.731 20:27:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.731 20:27:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:17.731 ************************************ 00:07:17.731 END TEST skip_rpc_with_delay 00:07:17.731 ************************************ 00:07:17.731 20:27:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:17.731 20:27:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:17.731 20:27:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:17.731 20:27:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.731 20:27:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.731 20:27:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.731 ************************************ 00:07:17.731 START TEST exit_on_failed_rpc_init 00:07:17.731 ************************************ 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1831118 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1831118 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1831118 ']' 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.731 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:17.731 [2024-12-05 20:27:11.143616] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
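The skip_rpc_with_delay timing just above (real 0m0.048s) shows the target exits during argument parsing: --wait-for-rpc is rejected outright once --no-rpc-server is given. The whole assertion reduces to one inverted status check; a sketch, using the binary path from the log:

    # The flag combination must fail fast with the error captured above:
    # "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server' >&2
        exit 1
    fi
    echo 'OK: expected non-zero exit'

The surrounding valid_exec_arg trace (type -t, then type -P) only verifies that the first argument resolves to an executable before running it.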
00:07:17.731 [2024-12-05 20:27:11.143693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831118 ] 00:07:17.990 [2024-12-05 20:27:11.217546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.990 [2024-12-05 20:27:11.266578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.250 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.250 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:18.250 20:27:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:18.251 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:18.251 [2024-12-05 20:27:11.523069] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:18.251 [2024-12-05 20:27:11.523141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831124 ] 00:07:18.251 [2024-12-05 20:27:11.596240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.251 [2024-12-05 20:27:11.641946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.251 [2024-12-05 20:27:11.642052] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
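The failure mode of exit_on_failed_rpc_init is fully visible here: the second spdk_tgt (pid 1831124, core mask 0x2) reaches rpc.c, finds /var/tmp/spdk.sock already owned by the first target (pid 1831118), and _spdk_rpc_listen refuses it; the follow-on lines just below show spdk_rpc_initialize failing and spdk_app_stop exiting non-zero (es=234, which the NOT helper folds down to 1). A hedged sketch of the collision being asserted; the paths are those in the log, the scaffolding is an assumption:

    # First target owns the default RPC socket /var/tmp/spdk.sock.
    ./build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    sleep 5    # crude stand-in for the waitforlisten polling in the trace

    # A second target on another core mask but the same socket must fail.
    if ./build/bin/spdk_tgt -m 0x2; then
        echo 'FAIL: second target bound an already-used /var/tmp/spdk.sock' >&2
        kill "$first_pid"; exit 1
    fi

    kill "$first_pid"
    wait "$first_pid" || true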
00:07:18.251 [2024-12-05 20:27:11.642066] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:18.251 [2024-12-05 20:27:11.642074] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1831118 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1831118 ']' 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1831118 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1831118 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1831118' 00:07:18.511 killing process with pid 1831118 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1831118 00:07:18.511 20:27:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1831118 00:07:18.771 00:07:18.771 real 0m0.955s 00:07:18.771 user 0m0.948s 00:07:18.771 sys 0m0.442s 00:07:18.771 20:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.771 20:27:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:18.771 ************************************ 00:07:18.771 END TEST exit_on_failed_rpc_init 00:07:18.771 ************************************ 00:07:18.771 20:27:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:07:18.771 00:07:18.771 real 0m13.292s 00:07:18.771 user 0m12.326s 00:07:18.771 sys 0m1.767s 00:07:18.772 20:27:12 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.772 20:27:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.772 ************************************ 00:07:18.772 END TEST skip_rpc 00:07:18.772 ************************************ 00:07:18.772 20:27:12 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:18.772 20:27:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.772 20:27:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.772 20:27:12 
-- common/autotest_common.sh@10 -- # set +x 00:07:18.772 ************************************ 00:07:18.772 START TEST rpc_client 00:07:18.772 ************************************ 00:07:18.772 20:27:12 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:19.033 * Looking for test storage... 00:07:19.033 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.033 20:27:12 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.033 --rc genhtml_branch_coverage=1 00:07:19.033 --rc genhtml_function_coverage=1 00:07:19.033 --rc genhtml_legend=1 00:07:19.033 --rc geninfo_all_blocks=1 00:07:19.033 --rc geninfo_unexecuted_blocks=1 00:07:19.033 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.033 ' 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.033 --rc genhtml_branch_coverage=1 00:07:19.033 --rc genhtml_function_coverage=1 00:07:19.033 --rc genhtml_legend=1 00:07:19.033 --rc geninfo_all_blocks=1 00:07:19.033 --rc geninfo_unexecuted_blocks=1 00:07:19.033 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.033 ' 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.033 --rc genhtml_branch_coverage=1 00:07:19.033 --rc genhtml_function_coverage=1 00:07:19.033 --rc genhtml_legend=1 00:07:19.033 --rc geninfo_all_blocks=1 00:07:19.033 --rc geninfo_unexecuted_blocks=1 00:07:19.033 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.033 ' 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.033 --rc genhtml_branch_coverage=1 00:07:19.033 --rc genhtml_function_coverage=1 00:07:19.033 --rc genhtml_legend=1 00:07:19.033 --rc geninfo_all_blocks=1 00:07:19.033 --rc geninfo_unexecuted_blocks=1 00:07:19.033 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.033 ' 00:07:19.033 20:27:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:19.033 OK 00:07:19.033 20:27:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:19.033 00:07:19.033 real 0m0.213s 00:07:19.033 user 0m0.102s 00:07:19.033 sys 0m0.128s 00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 
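The cmp_versions trace repeated above (lt 1.15 2, from scripts/common.sh) spells out the comparison: split both versions on '.', '-' and ':' (IFS=.-:), pad to the longer component list, and walk left to right, deciding at the first unequal pair, so 1.15 < 2 because 1 < 2 in the leading component. A condensed sketch of the same idea (not the verbatim function, which also routes components through a decimal() validator):

    # Return 0 when $1 < $2, comparing numeric version components
    # the way the cmp_versions trace above does.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local i a b
        for (( i = 0; i < n; i++ )); do
            a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not less-than
    }

    version_lt 1.15 2 && echo '1.15 is older than 2'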
00:07:19.033 20:27:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:19.033 ************************************ 00:07:19.033 END TEST rpc_client 00:07:19.033 ************************************ 00:07:19.033 20:27:12 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:19.033 20:27:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.033 20:27:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.033 20:27:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.294 ************************************ 00:07:19.294 START TEST json_config 00:07:19.294 ************************************ 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.294 20:27:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.294 20:27:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.294 20:27:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.294 20:27:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.294 20:27:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.294 20:27:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.294 20:27:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.294 20:27:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.294 20:27:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.294 20:27:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.294 20:27:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.294 20:27:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:19.294 20:27:12 json_config -- scripts/common.sh@345 -- # : 1 00:07:19.294 20:27:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.294 20:27:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.294 20:27:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:19.294 20:27:12 json_config -- scripts/common.sh@353 -- # local d=1 00:07:19.294 20:27:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.294 20:27:12 json_config -- scripts/common.sh@355 -- # echo 1 00:07:19.294 20:27:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.294 20:27:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:19.294 20:27:12 json_config -- scripts/common.sh@353 -- # local d=2 00:07:19.294 20:27:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.294 20:27:12 json_config -- scripts/common.sh@355 -- # echo 2 00:07:19.294 20:27:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.294 20:27:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.294 20:27:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.294 20:27:12 json_config -- scripts/common.sh@368 -- # return 0 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.294 --rc genhtml_branch_coverage=1 00:07:19.294 --rc genhtml_function_coverage=1 00:07:19.294 --rc genhtml_legend=1 00:07:19.294 --rc geninfo_all_blocks=1 00:07:19.294 --rc geninfo_unexecuted_blocks=1 00:07:19.294 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.294 ' 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.294 --rc genhtml_branch_coverage=1 00:07:19.294 --rc genhtml_function_coverage=1 00:07:19.294 --rc genhtml_legend=1 00:07:19.294 --rc geninfo_all_blocks=1 00:07:19.294 --rc geninfo_unexecuted_blocks=1 00:07:19.294 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.294 ' 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.294 --rc genhtml_branch_coverage=1 00:07:19.294 --rc genhtml_function_coverage=1 00:07:19.294 --rc genhtml_legend=1 00:07:19.294 --rc geninfo_all_blocks=1 00:07:19.294 --rc geninfo_unexecuted_blocks=1 00:07:19.294 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.294 ' 00:07:19.294 20:27:12 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.294 --rc genhtml_branch_coverage=1 00:07:19.294 --rc genhtml_function_coverage=1 00:07:19.294 --rc genhtml_legend=1 00:07:19.294 --rc geninfo_all_blocks=1 00:07:19.294 --rc geninfo_unexecuted_blocks=1 00:07:19.294 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.294 ' 00:07:19.294 20:27:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:19.294 20:27:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:19.295 20:27:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.295 20:27:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.295 20:27:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.295 20:27:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.295 20:27:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.295 20:27:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.295 20:27:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.295 20:27:12 json_config -- paths/export.sh@5 -- # export PATH 00:07:19.295 20:27:12 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@51 -- # : 0 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.295 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.295 20:27:12 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.295 20:27:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:19.295 20:27:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:19.295 20:27:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:19.295 20:27:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:19.295 20:27:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:19.295 20:27:12 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:19.295 WARNING: No tests are enabled so not running JSON configuration tests 00:07:19.295 20:27:12 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:19.295 00:07:19.295 real 0m0.167s 00:07:19.295 user 0m0.112s 00:07:19.295 sys 0m0.064s 00:07:19.295 20:27:12 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.295 20:27:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.295 ************************************ 00:07:19.295 END TEST json_config 00:07:19.295 ************************************ 00:07:19.295 20:27:12 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:19.295 20:27:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.295 20:27:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.295 20:27:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.556 ************************************ 00:07:19.556 START TEST json_config_extra_key 00:07:19.556 ************************************ 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print 
$NF}' 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.556 --rc genhtml_branch_coverage=1 00:07:19.556 --rc genhtml_function_coverage=1 00:07:19.556 --rc genhtml_legend=1 00:07:19.556 --rc geninfo_all_blocks=1 00:07:19.556 --rc geninfo_unexecuted_blocks=1 00:07:19.556 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.556 ' 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.556 --rc genhtml_branch_coverage=1 00:07:19.556 
--rc genhtml_function_coverage=1 00:07:19.556 --rc genhtml_legend=1 00:07:19.556 --rc geninfo_all_blocks=1 00:07:19.556 --rc geninfo_unexecuted_blocks=1 00:07:19.556 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.556 ' 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.556 --rc genhtml_branch_coverage=1 00:07:19.556 --rc genhtml_function_coverage=1 00:07:19.556 --rc genhtml_legend=1 00:07:19.556 --rc geninfo_all_blocks=1 00:07:19.556 --rc geninfo_unexecuted_blocks=1 00:07:19.556 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.556 ' 00:07:19.556 20:27:12 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.556 --rc genhtml_branch_coverage=1 00:07:19.556 --rc genhtml_function_coverage=1 00:07:19.556 --rc genhtml_legend=1 00:07:19.556 --rc geninfo_all_blocks=1 00:07:19.556 --rc geninfo_unexecuted_blocks=1 00:07:19.556 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:19.556 ' 00:07:19.556 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.556 20:27:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.556 20:27:12 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.556 20:27:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.556 20:27:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.557 20:27:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.557 20:27:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.557 20:27:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:19.557 20:27:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.557 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.557 20:27:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:19.557 INFO: launching applications... 00:07:19.557 20:27:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1831476 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:19.557 Waiting for target to run... 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1831476 /var/tmp/spdk_tgt.sock 00:07:19.557 20:27:12 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1831476 ']' 00:07:19.557 20:27:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:07:19.557 20:27:12 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:19.557 20:27:12 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.557 20:27:12 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:19.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
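Two details in the trace above are worth unpacking. The "[: : integer expression expected" message from nvmf/common.sh line 33 is harmless here: the guard '[' '' -eq 1 ']' hands an empty string to -eq, which bash's test builtin rejects, so the condition simply evaluates false and sourcing continues. The "Waiting for target to run..." step that follows then polls until spdk_tgt binds its RPC socket. A minimal bash sketch of both patterns, assuming hypothetical names (FLAG, wait_for_socket); the real harness helpers differ in detail:

#!/usr/bin/env bash
# Default an empty or unset value before an integer test; a bare
# [ "$FLAG" -eq 1 ] prints "[: : integer expression expected"
# whenever FLAG is empty, as seen in the trace above.
FLAG=""
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi

# Poll until a freshly launched target answers on its UNIX-domain
# RPC socket (the waitforlisten idea), giving up after ~10 s.
wait_for_socket() {
    local sock=$1 retries=100
    while ((retries-- > 0)); do
        # rpc.py exits non-zero until the app is listening
        if scripts/rpc.py -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
wait_for_socket /var/tmp/spdk_tgt.sock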
00:07:19.557 20:27:12 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.557 20:27:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:19.557 [2024-12-05 20:27:12.980345] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:19.557 [2024-12-05 20:27:12.980430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831476 ] 00:07:20.125 [2024-12-05 20:27:13.471894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.125 [2024-12-05 20:27:13.526639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.692 20:27:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.692 20:27:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:20.692 20:27:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:20.692 00:07:20.692 20:27:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:20.692 INFO: shutting down applications... 00:07:20.692 20:27:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:20.692 20:27:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:20.693 20:27:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:20.693 20:27:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1831476 ]] 00:07:20.693 20:27:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1831476 00:07:20.693 20:27:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:20.693 20:27:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:20.693 20:27:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1831476 00:07:20.693 20:27:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:20.952 20:27:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:20.952 20:27:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:20.952 20:27:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1831476 00:07:20.952 20:27:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:20.952 20:27:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:20.952 20:27:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:20.952 20:27:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:20.952 SPDK target shutdown done 00:07:20.952 20:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:20.952 Success 00:07:20.952 00:07:20.952 real 0m1.604s 00:07:20.952 user 0m1.182s 00:07:20.952 sys 0m0.632s 00:07:20.952 20:27:14 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.952 20:27:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:20.952 ************************************ 00:07:20.952 END TEST json_config_extra_key 00:07:20.952 ************************************ 00:07:20.952 20:27:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
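The shutdown sequence traced above (json_config/common.sh lines 38-45) follows a standard graceful-stop pattern: send SIGINT, then poll with kill -0, which delivers no signal and only checks that the PID still exists, sleeping 0.5 s between probes for up to 30 iterations. A compact sketch of that loop; the SIGKILL fallback is an assumption, since this run exited the loop before needing one:

shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it only tests process existence
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    kill -9 "$pid"   # hypothetical last resort after ~15 s
}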
00:07:20.952 20:27:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.952 20:27:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.952 20:27:14 -- common/autotest_common.sh@10 -- # set +x 00:07:21.211 ************************************ 00:07:21.211 START TEST alias_rpc 00:07:21.211 ************************************ 00:07:21.211 20:27:14 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:21.211 * Looking for test storage... 00:07:21.211 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:07:21.211 20:27:14 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:21.211 20:27:14 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:21.211 20:27:14 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:21.211 20:27:14 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:21.211 20:27:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.212 20:27:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:21.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.212 --rc genhtml_branch_coverage=1 00:07:21.212 --rc genhtml_function_coverage=1 00:07:21.212 --rc genhtml_legend=1 00:07:21.212 --rc geninfo_all_blocks=1 00:07:21.212 --rc geninfo_unexecuted_blocks=1 00:07:21.212 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:21.212 ' 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:21.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.212 --rc genhtml_branch_coverage=1 00:07:21.212 --rc genhtml_function_coverage=1 00:07:21.212 --rc genhtml_legend=1 00:07:21.212 --rc geninfo_all_blocks=1 00:07:21.212 --rc geninfo_unexecuted_blocks=1 00:07:21.212 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:21.212 ' 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:21.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.212 --rc genhtml_branch_coverage=1 00:07:21.212 --rc genhtml_function_coverage=1 00:07:21.212 --rc genhtml_legend=1 00:07:21.212 --rc geninfo_all_blocks=1 00:07:21.212 --rc geninfo_unexecuted_blocks=1 00:07:21.212 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:21.212 ' 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:21.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.212 --rc genhtml_branch_coverage=1 00:07:21.212 --rc genhtml_function_coverage=1 00:07:21.212 --rc genhtml_legend=1 00:07:21.212 --rc geninfo_all_blocks=1 00:07:21.212 --rc geninfo_unexecuted_blocks=1 00:07:21.212 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:21.212 ' 00:07:21.212 20:27:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:21.212 20:27:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1831725 00:07:21.212 20:27:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1831725 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1831725 ']' 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.212 20:27:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:21.212 20:27:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.212 [2024-12-05 20:27:14.619308] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:21.212 [2024-12-05 20:27:14.619383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831725 ] 00:07:21.474 [2024-12-05 20:27:14.694249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.474 [2024-12-05 20:27:14.741298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.732 20:27:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.732 20:27:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.732 20:27:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:21.992 20:27:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1831725 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1831725 ']' 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1831725 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1831725 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1831725' 00:07:21.992 killing process with pid 1831725 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@973 -- # kill 1831725 00:07:21.992 20:27:15 alias_rpc -- common/autotest_common.sh@978 -- # wait 1831725 00:07:22.252 00:07:22.252 real 0m1.127s 00:07:22.252 user 0m1.130s 00:07:22.252 sys 0m0.439s 00:07:22.252 20:27:15 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.252 20:27:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.252 ************************************ 00:07:22.252 END TEST alias_rpc 00:07:22.252 ************************************ 00:07:22.252 20:27:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:22.252 20:27:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:22.252 20:27:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.252 20:27:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.252 20:27:15 -- common/autotest_common.sh@10 -- # set +x 00:07:22.252 ************************************ 00:07:22.252 START TEST 
spdkcli_tcp 00:07:22.252 ************************************ 00:07:22.252 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:22.511 * Looking for test storage... 00:07:22.512 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.512 20:27:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:22.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.512 --rc genhtml_branch_coverage=1 00:07:22.512 --rc genhtml_function_coverage=1 00:07:22.512 --rc genhtml_legend=1 00:07:22.512 --rc geninfo_all_blocks=1 00:07:22.512 --rc geninfo_unexecuted_blocks=1 00:07:22.512 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:22.512 ' 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:22.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.512 --rc genhtml_branch_coverage=1 00:07:22.512 --rc genhtml_function_coverage=1 00:07:22.512 --rc genhtml_legend=1 00:07:22.512 --rc geninfo_all_blocks=1 00:07:22.512 --rc geninfo_unexecuted_blocks=1 00:07:22.512 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:22.512 ' 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:22.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.512 --rc genhtml_branch_coverage=1 00:07:22.512 --rc genhtml_function_coverage=1 00:07:22.512 --rc genhtml_legend=1 00:07:22.512 --rc geninfo_all_blocks=1 00:07:22.512 --rc geninfo_unexecuted_blocks=1 00:07:22.512 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:22.512 ' 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:22.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.512 --rc genhtml_branch_coverage=1 00:07:22.512 --rc genhtml_function_coverage=1 00:07:22.512 --rc genhtml_legend=1 00:07:22.512 --rc geninfo_all_blocks=1 00:07:22.512 --rc geninfo_unexecuted_blocks=1 00:07:22.512 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:22.512 ' 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1831966 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:22.512 20:27:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1831966 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1831966 ']' 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.512 20:27:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.512 [2024-12-05 20:27:15.835727] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:22.512 [2024-12-05 20:27:15.835791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831966 ] 00:07:22.512 [2024-12-05 20:27:15.906193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.771 [2024-12-05 20:27:15.956201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.771 [2024-12-05 20:27:15.956204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.771 20:27:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.771 20:27:16 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:22.771 20:27:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1832071 00:07:22.771 20:27:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:22.771 20:27:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:23.052 [ 00:07:23.052 "spdk_get_version", 00:07:23.052 "rpc_get_methods", 00:07:23.052 "notify_get_notifications", 00:07:23.052 "notify_get_types", 00:07:23.052 "trace_get_info", 00:07:23.052 "trace_get_tpoint_group_mask", 00:07:23.052 "trace_disable_tpoint_group", 00:07:23.052 "trace_enable_tpoint_group", 00:07:23.052 "trace_clear_tpoint_mask", 00:07:23.052 "trace_set_tpoint_mask", 00:07:23.052 "fsdev_set_opts", 00:07:23.052 "fsdev_get_opts", 00:07:23.052 "framework_get_pci_devices", 00:07:23.052 "framework_get_config", 00:07:23.052 "framework_get_subsystems", 00:07:23.052 "vfu_tgt_set_base_path", 00:07:23.052 
"keyring_get_keys", 00:07:23.052 "iobuf_get_stats", 00:07:23.052 "iobuf_set_options", 00:07:23.052 "sock_get_default_impl", 00:07:23.052 "sock_set_default_impl", 00:07:23.052 "sock_impl_set_options", 00:07:23.052 "sock_impl_get_options", 00:07:23.052 "vmd_rescan", 00:07:23.052 "vmd_remove_device", 00:07:23.052 "vmd_enable", 00:07:23.052 "accel_get_stats", 00:07:23.052 "accel_set_options", 00:07:23.052 "accel_set_driver", 00:07:23.052 "accel_crypto_key_destroy", 00:07:23.052 "accel_crypto_keys_get", 00:07:23.052 "accel_crypto_key_create", 00:07:23.052 "accel_assign_opc", 00:07:23.052 "accel_get_module_info", 00:07:23.052 "accel_get_opc_assignments", 00:07:23.052 "bdev_get_histogram", 00:07:23.052 "bdev_enable_histogram", 00:07:23.052 "bdev_set_qos_limit", 00:07:23.052 "bdev_set_qd_sampling_period", 00:07:23.052 "bdev_get_bdevs", 00:07:23.052 "bdev_reset_iostat", 00:07:23.052 "bdev_get_iostat", 00:07:23.052 "bdev_examine", 00:07:23.052 "bdev_wait_for_examine", 00:07:23.052 "bdev_set_options", 00:07:23.052 "scsi_get_devices", 00:07:23.052 "thread_set_cpumask", 00:07:23.052 "scheduler_set_options", 00:07:23.052 "framework_get_governor", 00:07:23.052 "framework_get_scheduler", 00:07:23.052 "framework_set_scheduler", 00:07:23.052 "framework_get_reactors", 00:07:23.052 "thread_get_io_channels", 00:07:23.052 "thread_get_pollers", 00:07:23.052 "thread_get_stats", 00:07:23.052 "framework_monitor_context_switch", 00:07:23.052 "spdk_kill_instance", 00:07:23.052 "log_enable_timestamps", 00:07:23.052 "log_get_flags", 00:07:23.052 "log_clear_flag", 00:07:23.052 "log_set_flag", 00:07:23.052 "log_get_level", 00:07:23.052 "log_set_level", 00:07:23.052 "log_get_print_level", 00:07:23.052 "log_set_print_level", 00:07:23.052 "framework_enable_cpumask_locks", 00:07:23.052 "framework_disable_cpumask_locks", 00:07:23.052 "framework_wait_init", 00:07:23.052 "framework_start_init", 00:07:23.052 "virtio_blk_create_transport", 00:07:23.052 "virtio_blk_get_transports", 00:07:23.052 "vhost_controller_set_coalescing", 00:07:23.052 "vhost_get_controllers", 00:07:23.052 "vhost_delete_controller", 00:07:23.052 "vhost_create_blk_controller", 00:07:23.052 "vhost_scsi_controller_remove_target", 00:07:23.052 "vhost_scsi_controller_add_target", 00:07:23.052 "vhost_start_scsi_controller", 00:07:23.052 "vhost_create_scsi_controller", 00:07:23.052 "ublk_recover_disk", 00:07:23.052 "ublk_get_disks", 00:07:23.052 "ublk_stop_disk", 00:07:23.052 "ublk_start_disk", 00:07:23.052 "ublk_destroy_target", 00:07:23.052 "ublk_create_target", 00:07:23.052 "nbd_get_disks", 00:07:23.052 "nbd_stop_disk", 00:07:23.052 "nbd_start_disk", 00:07:23.052 "env_dpdk_get_mem_stats", 00:07:23.052 "nvmf_stop_mdns_prr", 00:07:23.052 "nvmf_publish_mdns_prr", 00:07:23.052 "nvmf_subsystem_get_listeners", 00:07:23.052 "nvmf_subsystem_get_qpairs", 00:07:23.052 "nvmf_subsystem_get_controllers", 00:07:23.052 "nvmf_get_stats", 00:07:23.052 "nvmf_get_transports", 00:07:23.052 "nvmf_create_transport", 00:07:23.052 "nvmf_get_targets", 00:07:23.052 "nvmf_delete_target", 00:07:23.052 "nvmf_create_target", 00:07:23.052 "nvmf_subsystem_allow_any_host", 00:07:23.052 "nvmf_subsystem_set_keys", 00:07:23.052 "nvmf_subsystem_remove_host", 00:07:23.052 "nvmf_subsystem_add_host", 00:07:23.052 "nvmf_ns_remove_host", 00:07:23.052 "nvmf_ns_add_host", 00:07:23.052 "nvmf_subsystem_remove_ns", 00:07:23.052 "nvmf_subsystem_set_ns_ana_group", 00:07:23.052 "nvmf_subsystem_add_ns", 00:07:23.052 "nvmf_subsystem_listener_set_ana_state", 00:07:23.052 "nvmf_discovery_get_referrals", 
00:07:23.052 "nvmf_discovery_remove_referral", 00:07:23.052 "nvmf_discovery_add_referral", 00:07:23.052 "nvmf_subsystem_remove_listener", 00:07:23.052 "nvmf_subsystem_add_listener", 00:07:23.052 "nvmf_delete_subsystem", 00:07:23.052 "nvmf_create_subsystem", 00:07:23.052 "nvmf_get_subsystems", 00:07:23.052 "nvmf_set_crdt", 00:07:23.052 "nvmf_set_config", 00:07:23.052 "nvmf_set_max_subsystems", 00:07:23.052 "iscsi_get_histogram", 00:07:23.052 "iscsi_enable_histogram", 00:07:23.052 "iscsi_set_options", 00:07:23.052 "iscsi_get_auth_groups", 00:07:23.052 "iscsi_auth_group_remove_secret", 00:07:23.052 "iscsi_auth_group_add_secret", 00:07:23.052 "iscsi_delete_auth_group", 00:07:23.052 "iscsi_create_auth_group", 00:07:23.052 "iscsi_set_discovery_auth", 00:07:23.052 "iscsi_get_options", 00:07:23.053 "iscsi_target_node_request_logout", 00:07:23.053 "iscsi_target_node_set_redirect", 00:07:23.053 "iscsi_target_node_set_auth", 00:07:23.053 "iscsi_target_node_add_lun", 00:07:23.053 "iscsi_get_stats", 00:07:23.053 "iscsi_get_connections", 00:07:23.053 "iscsi_portal_group_set_auth", 00:07:23.053 "iscsi_start_portal_group", 00:07:23.053 "iscsi_delete_portal_group", 00:07:23.053 "iscsi_create_portal_group", 00:07:23.053 "iscsi_get_portal_groups", 00:07:23.053 "iscsi_delete_target_node", 00:07:23.053 "iscsi_target_node_remove_pg_ig_maps", 00:07:23.053 "iscsi_target_node_add_pg_ig_maps", 00:07:23.053 "iscsi_create_target_node", 00:07:23.053 "iscsi_get_target_nodes", 00:07:23.053 "iscsi_delete_initiator_group", 00:07:23.053 "iscsi_initiator_group_remove_initiators", 00:07:23.053 "iscsi_initiator_group_add_initiators", 00:07:23.053 "iscsi_create_initiator_group", 00:07:23.053 "iscsi_get_initiator_groups", 00:07:23.053 "fsdev_aio_delete", 00:07:23.053 "fsdev_aio_create", 00:07:23.053 "keyring_linux_set_options", 00:07:23.053 "keyring_file_remove_key", 00:07:23.053 "keyring_file_add_key", 00:07:23.053 "vfu_virtio_create_fs_endpoint", 00:07:23.053 "vfu_virtio_create_scsi_endpoint", 00:07:23.053 "vfu_virtio_scsi_remove_target", 00:07:23.053 "vfu_virtio_scsi_add_target", 00:07:23.053 "vfu_virtio_create_blk_endpoint", 00:07:23.053 "vfu_virtio_delete_endpoint", 00:07:23.053 "iaa_scan_accel_module", 00:07:23.053 "dsa_scan_accel_module", 00:07:23.053 "ioat_scan_accel_module", 00:07:23.053 "accel_error_inject_error", 00:07:23.053 "bdev_iscsi_delete", 00:07:23.053 "bdev_iscsi_create", 00:07:23.053 "bdev_iscsi_set_options", 00:07:23.053 "bdev_virtio_attach_controller", 00:07:23.053 "bdev_virtio_scsi_get_devices", 00:07:23.053 "bdev_virtio_detach_controller", 00:07:23.053 "bdev_virtio_blk_set_hotplug", 00:07:23.053 "bdev_ftl_set_property", 00:07:23.053 "bdev_ftl_get_properties", 00:07:23.053 "bdev_ftl_get_stats", 00:07:23.053 "bdev_ftl_unmap", 00:07:23.053 "bdev_ftl_unload", 00:07:23.053 "bdev_ftl_delete", 00:07:23.053 "bdev_ftl_load", 00:07:23.053 "bdev_ftl_create", 00:07:23.053 "bdev_aio_delete", 00:07:23.053 "bdev_aio_rescan", 00:07:23.053 "bdev_aio_create", 00:07:23.053 "blobfs_create", 00:07:23.053 "blobfs_detect", 00:07:23.053 "blobfs_set_cache_size", 00:07:23.053 "bdev_zone_block_delete", 00:07:23.053 "bdev_zone_block_create", 00:07:23.053 "bdev_delay_delete", 00:07:23.053 "bdev_delay_create", 00:07:23.053 "bdev_delay_update_latency", 00:07:23.053 "bdev_split_delete", 00:07:23.053 "bdev_split_create", 00:07:23.053 "bdev_error_inject_error", 00:07:23.053 "bdev_error_delete", 00:07:23.053 "bdev_error_create", 00:07:23.053 "bdev_raid_set_options", 00:07:23.053 "bdev_raid_remove_base_bdev", 00:07:23.053 
"bdev_raid_add_base_bdev", 00:07:23.053 "bdev_raid_delete", 00:07:23.053 "bdev_raid_create", 00:07:23.053 "bdev_raid_get_bdevs", 00:07:23.053 "bdev_lvol_set_parent_bdev", 00:07:23.053 "bdev_lvol_set_parent", 00:07:23.053 "bdev_lvol_check_shallow_copy", 00:07:23.053 "bdev_lvol_start_shallow_copy", 00:07:23.053 "bdev_lvol_grow_lvstore", 00:07:23.053 "bdev_lvol_get_lvols", 00:07:23.053 "bdev_lvol_get_lvstores", 00:07:23.053 "bdev_lvol_delete", 00:07:23.053 "bdev_lvol_set_read_only", 00:07:23.053 "bdev_lvol_resize", 00:07:23.053 "bdev_lvol_decouple_parent", 00:07:23.053 "bdev_lvol_inflate", 00:07:23.053 "bdev_lvol_rename", 00:07:23.053 "bdev_lvol_clone_bdev", 00:07:23.053 "bdev_lvol_clone", 00:07:23.053 "bdev_lvol_snapshot", 00:07:23.053 "bdev_lvol_create", 00:07:23.053 "bdev_lvol_delete_lvstore", 00:07:23.053 "bdev_lvol_rename_lvstore", 00:07:23.053 "bdev_lvol_create_lvstore", 00:07:23.053 "bdev_passthru_delete", 00:07:23.053 "bdev_passthru_create", 00:07:23.053 "bdev_nvme_cuse_unregister", 00:07:23.053 "bdev_nvme_cuse_register", 00:07:23.053 "bdev_opal_new_user", 00:07:23.053 "bdev_opal_set_lock_state", 00:07:23.053 "bdev_opal_delete", 00:07:23.053 "bdev_opal_get_info", 00:07:23.053 "bdev_opal_create", 00:07:23.053 "bdev_nvme_opal_revert", 00:07:23.053 "bdev_nvme_opal_init", 00:07:23.053 "bdev_nvme_send_cmd", 00:07:23.053 "bdev_nvme_set_keys", 00:07:23.053 "bdev_nvme_get_path_iostat", 00:07:23.053 "bdev_nvme_get_mdns_discovery_info", 00:07:23.053 "bdev_nvme_stop_mdns_discovery", 00:07:23.053 "bdev_nvme_start_mdns_discovery", 00:07:23.053 "bdev_nvme_set_multipath_policy", 00:07:23.053 "bdev_nvme_set_preferred_path", 00:07:23.053 "bdev_nvme_get_io_paths", 00:07:23.053 "bdev_nvme_remove_error_injection", 00:07:23.053 "bdev_nvme_add_error_injection", 00:07:23.053 "bdev_nvme_get_discovery_info", 00:07:23.053 "bdev_nvme_stop_discovery", 00:07:23.053 "bdev_nvme_start_discovery", 00:07:23.053 "bdev_nvme_get_controller_health_info", 00:07:23.053 "bdev_nvme_disable_controller", 00:07:23.053 "bdev_nvme_enable_controller", 00:07:23.053 "bdev_nvme_reset_controller", 00:07:23.053 "bdev_nvme_get_transport_statistics", 00:07:23.053 "bdev_nvme_apply_firmware", 00:07:23.053 "bdev_nvme_detach_controller", 00:07:23.053 "bdev_nvme_get_controllers", 00:07:23.053 "bdev_nvme_attach_controller", 00:07:23.053 "bdev_nvme_set_hotplug", 00:07:23.053 "bdev_nvme_set_options", 00:07:23.053 "bdev_null_resize", 00:07:23.053 "bdev_null_delete", 00:07:23.053 "bdev_null_create", 00:07:23.053 "bdev_malloc_delete", 00:07:23.053 "bdev_malloc_create" 00:07:23.053 ] 00:07:23.053 20:27:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:23.053 20:27:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.053 20:27:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.053 20:27:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:23.053 20:27:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1831966 00:07:23.053 20:27:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1831966 ']' 00:07:23.053 20:27:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1831966 00:07:23.053 20:27:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:23.053 20:27:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.053 20:27:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1831966 00:07:23.334 20:27:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.334 
20:27:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.334 20:27:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1831966' 00:07:23.334 killing process with pid 1831966 00:07:23.334 20:27:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1831966 00:07:23.334 20:27:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1831966 00:07:23.609 00:07:23.609 real 0m1.189s 00:07:23.609 user 0m2.002s 00:07:23.609 sys 0m0.498s 00:07:23.609 20:27:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.609 20:27:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.609 ************************************ 00:07:23.609 END TEST spdkcli_tcp 00:07:23.609 ************************************ 00:07:23.609 20:27:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:23.609 20:27:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.609 20:27:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.609 20:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:23.609 ************************************ 00:07:23.609 START TEST dpdk_mem_utility 00:07:23.609 ************************************ 00:07:23.609 20:27:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:23.609 * Looking for test storage... 00:07:23.609 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:07:23.609 20:27:16 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:23.609 20:27:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:23.609 20:27:16 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:23.955 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.955 20:27:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:23.955 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.955 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.955 --rc genhtml_branch_coverage=1 00:07:23.955 --rc genhtml_function_coverage=1 00:07:23.955 --rc genhtml_legend=1 00:07:23.955 --rc geninfo_all_blocks=1 00:07:23.955 --rc geninfo_unexecuted_blocks=1 00:07:23.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:23.955 ' 00:07:23.955 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.955 --rc genhtml_branch_coverage=1 00:07:23.955 --rc genhtml_function_coverage=1 00:07:23.955 --rc genhtml_legend=1 00:07:23.955 --rc geninfo_all_blocks=1 00:07:23.955 --rc geninfo_unexecuted_blocks=1 00:07:23.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:23.955 ' 00:07:23.955 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.955 --rc genhtml_branch_coverage=1 00:07:23.955 --rc genhtml_function_coverage=1 00:07:23.955 --rc genhtml_legend=1 00:07:23.955 --rc geninfo_all_blocks=1 00:07:23.955 --rc geninfo_unexecuted_blocks=1 00:07:23.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:23.955 ' 00:07:23.955 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.955 --rc genhtml_branch_coverage=1 00:07:23.955 --rc genhtml_function_coverage=1 00:07:23.955 --rc genhtml_legend=1 00:07:23.955 --rc geninfo_all_blocks=1 00:07:23.955 --rc geninfo_unexecuted_blocks=1 00:07:23.955 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:23.955 ' 00:07:23.955 20:27:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:23.955 20:27:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1832226 00:07:23.955 20:27:17 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:07:23.956 20:27:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1832226 00:07:23.956 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1832226 ']' 00:07:23.956 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.956 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.956 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.956 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.956 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:23.956 [2024-12-05 20:27:17.098981] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:23.956 [2024-12-05 20:27:17.099054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832226 ] 00:07:23.956 [2024-12-05 20:27:17.170634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.956 [2024-12-05 20:27:17.214029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.225 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.225 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:24.225 20:27:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:24.225 20:27:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:24.225 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.225 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:24.225 { 00:07:24.225 "filename": "/tmp/spdk_mem_dump.txt" 00:07:24.225 } 00:07:24.225 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.225 20:27:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:24.225 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:24.225 1 heaps totaling size 818.000000 MiB 00:07:24.225 size: 818.000000 MiB heap id: 0 00:07:24.225 end heaps---------- 00:07:24.225 9 mempools totaling size 603.782043 MiB 00:07:24.225 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:24.225 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:24.225 size: 100.555481 MiB name: bdev_io_1832226 00:07:24.225 size: 50.003479 MiB name: msgpool_1832226 00:07:24.225 size: 36.509338 MiB name: fsdev_io_1832226 00:07:24.225 size: 21.763794 MiB name: PDU_Pool 00:07:24.225 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:24.225 size: 4.133484 MiB name: evtpool_1832226 00:07:24.226 size: 0.026123 MiB name: Session_Pool 00:07:24.226 end mempools------- 00:07:24.226 6 memzones totaling size 4.142822 MiB 00:07:24.226 size: 1.000366 MiB name: RG_ring_0_1832226 00:07:24.226 size: 1.000366 MiB name: RG_ring_1_1832226 00:07:24.226 size: 1.000366 MiB name: RG_ring_4_1832226 
00:07:24.226 size: 1.000366 MiB name: RG_ring_5_1832226 00:07:24.226 size: 0.125366 MiB name: RG_ring_2_1832226 00:07:24.226 size: 0.015991 MiB name: RG_ring_3_1832226 00:07:24.226 end memzones------- 00:07:24.226 20:27:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:24.226 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:24.226 list of free elements. size: 10.852478 MiB 00:07:24.226 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:24.226 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:24.226 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:24.226 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:24.226 element at address: 0x200008000000 with size: 0.959839 MiB 00:07:24.226 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:24.226 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:24.226 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:24.226 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:07:24.226 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:24.226 element at address: 0x200003e00000 with size: 0.490723 MiB 00:07:24.226 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:24.226 element at address: 0x200010600000 with size: 0.481934 MiB 00:07:24.226 element at address: 0x200028200000 with size: 0.410034 MiB 00:07:24.226 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:24.226 list of standard malloc elements. size: 199.218628 MiB 00:07:24.226 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:07:24.226 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:07:24.226 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:24.226 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:24.226 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:24.226 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:24.226 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:24.226 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:24.226 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:24.226 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20000085b100 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000008df880 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:24.226 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:24.226 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:24.226 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:07:24.226 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:07:24.226 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:07:24.226 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20001067b600 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:07:24.226 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:24.226 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:24.226 element at address: 0x200028268f80 with size: 0.000183 MiB 00:07:24.226 element at address: 0x200028269040 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:24.226 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:24.226 list of memzone associated elements. size: 607.928894 MiB 00:07:24.226 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:24.226 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:24.226 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:24.226 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:24.226 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:24.226 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1832226_0 00:07:24.226 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:24.226 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1832226_0 00:07:24.226 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:07:24.226 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1832226_0 00:07:24.226 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:24.226 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:24.226 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:24.226 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:24.226 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:24.226 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1832226_0 00:07:24.226 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:24.226 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1832226 00:07:24.226 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:24.226 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1832226 00:07:24.226 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:07:24.226 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:24.226 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:24.226 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:24.226 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:07:24.226 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:24.226 element at address: 0x200003efde40 with size: 1.008118 MiB 00:07:24.226 associated memzone info: size: 1.007996 
MiB name: MP_SCSI_TASK_Pool 00:07:24.226 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:24.226 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1832226 00:07:24.226 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:24.226 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1832226 00:07:24.226 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:24.226 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1832226 00:07:24.226 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:24.226 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1832226 00:07:24.226 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:07:24.226 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1832226 00:07:24.226 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:24.226 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1832226 00:07:24.226 element at address: 0x20001067b780 with size: 0.500488 MiB 00:07:24.226 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:24.226 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:07:24.226 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:24.226 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:24.226 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:24.226 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:24.226 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1832226 00:07:24.226 element at address: 0x2000008df940 with size: 0.125488 MiB 00:07:24.226 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1832226 00:07:24.226 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:07:24.226 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:24.226 element at address: 0x200028269100 with size: 0.023743 MiB 00:07:24.226 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:24.227 element at address: 0x2000008db680 with size: 0.016113 MiB 00:07:24.227 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1832226 00:07:24.227 element at address: 0x20002826f240 with size: 0.002441 MiB 00:07:24.227 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:24.227 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:24.227 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1832226 00:07:24.227 element at address: 0x2000008db480 with size: 0.000305 MiB 00:07:24.227 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1832226 00:07:24.227 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:24.227 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1832226 00:07:24.227 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:07:24.227 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:24.227 20:27:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:24.227 20:27:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1832226 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1832226 ']' 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1832226 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1832226 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1832226' 00:07:24.227 killing process with pid 1832226 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1832226 00:07:24.227 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1832226 00:07:24.487 00:07:24.487 real 0m1.030s 00:07:24.487 user 0m0.931s 00:07:24.487 sys 0m0.434s 00:07:24.487 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.487 20:27:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:24.487 ************************************ 00:07:24.487 END TEST dpdk_mem_utility 00:07:24.487 ************************************ 00:07:24.746 20:27:17 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:24.746 20:27:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.746 20:27:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.746 20:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.746 ************************************ 00:07:24.746 START TEST event 00:07:24.746 ************************************ 00:07:24.746 20:27:18 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:07:24.746 * Looking for test storage... 00:07:24.746 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:24.746 20:27:18 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:24.746 20:27:18 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:24.746 20:27:18 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:24.746 20:27:18 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:24.746 20:27:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.746 20:27:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.746 20:27:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.746 20:27:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.746 20:27:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.746 20:27:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.746 20:27:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.746 20:27:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.746 20:27:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.746 20:27:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.746 20:27:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.746 20:27:18 event -- scripts/common.sh@344 -- # case "$op" in 00:07:24.746 20:27:18 event -- scripts/common.sh@345 -- # : 1 00:07:24.746 20:27:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.746 20:27:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.746 20:27:18 event -- scripts/common.sh@365 -- # decimal 1 00:07:25.005 20:27:18 event -- scripts/common.sh@353 -- # local d=1 00:07:25.005 20:27:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.005 20:27:18 event -- scripts/common.sh@355 -- # echo 1 00:07:25.005 20:27:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.005 20:27:18 event -- scripts/common.sh@366 -- # decimal 2 00:07:25.005 20:27:18 event -- scripts/common.sh@353 -- # local d=2 00:07:25.005 20:27:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.005 20:27:18 event -- scripts/common.sh@355 -- # echo 2 00:07:25.005 20:27:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.005 20:27:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.005 20:27:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.005 20:27:18 event -- scripts/common.sh@368 -- # return 0 00:07:25.005 20:27:18 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.005 20:27:18 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:25.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.005 --rc genhtml_branch_coverage=1 00:07:25.005 --rc genhtml_function_coverage=1 00:07:25.005 --rc genhtml_legend=1 00:07:25.005 --rc geninfo_all_blocks=1 00:07:25.005 --rc geninfo_unexecuted_blocks=1 00:07:25.005 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:25.005 ' 00:07:25.005 20:27:18 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:25.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.005 --rc genhtml_branch_coverage=1 00:07:25.005 --rc genhtml_function_coverage=1 00:07:25.005 --rc genhtml_legend=1 00:07:25.005 --rc geninfo_all_blocks=1 00:07:25.005 --rc geninfo_unexecuted_blocks=1 00:07:25.006 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:25.006 ' 00:07:25.006 20:27:18 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.006 --rc genhtml_branch_coverage=1 00:07:25.006 --rc genhtml_function_coverage=1 00:07:25.006 --rc genhtml_legend=1 00:07:25.006 --rc geninfo_all_blocks=1 00:07:25.006 --rc geninfo_unexecuted_blocks=1 00:07:25.006 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:25.006 ' 00:07:25.006 20:27:18 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:25.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.006 --rc genhtml_branch_coverage=1 00:07:25.006 --rc genhtml_function_coverage=1 00:07:25.006 --rc genhtml_legend=1 00:07:25.006 --rc geninfo_all_blocks=1 00:07:25.006 --rc geninfo_unexecuted_blocks=1 00:07:25.006 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:25.006 ' 00:07:25.006 20:27:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:25.006 20:27:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:25.006 20:27:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:25.006 20:27:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:25.006 20:27:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 
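The version-compare trace above (lt 1.15 2 via cmp_versions) is the harness deciding whether the installed lcov predates 2.0 before picking coverage flags. A minimal standalone sketch of that comparison, assuming purely numeric version components (the real scripts/common.sh also validates each field with a regex):

# Sketch of the cmp_versions logic traced above: split on ".-:" and
# compare component by component; missing components count as 0.
lt() {
  local -a ver1 ver2
  local v n
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < n; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions: "1.15 < 1.15" is false
}
lt 1.15 2 && echo "lcov is older than 2, using legacy --rc options"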
00:07:25.006 20:27:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.006 ************************************ 00:07:25.006 START TEST event_perf 00:07:25.006 ************************************ 00:07:25.006 20:27:18 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:25.006 Running I/O for 1 seconds...[2024-12-05 20:27:18.248847] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:25.006 [2024-12-05 20:27:18.248933] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832472 ] 00:07:25.006 [2024-12-05 20:27:18.325751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.006 [2024-12-05 20:27:18.373893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.006 [2024-12-05 20:27:18.373982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.006 [2024-12-05 20:27:18.374060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.006 [2024-12-05 20:27:18.374059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.383 Running I/O for 1 seconds... 00:07:26.383 lcore 0: 191684 00:07:26.383 lcore 1: 191685 00:07:26.383 lcore 2: 191686 00:07:26.383 lcore 3: 191685 00:07:26.383 done. 00:07:26.383 00:07:26.383 real 0m1.187s 00:07:26.383 user 0m4.096s 00:07:26.383 sys 0m0.088s 00:07:26.383 20:27:19 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.383 20:27:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.383 ************************************ 00:07:26.383 END TEST event_perf 00:07:26.383 ************************************ 00:07:26.383 20:27:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:26.383 20:27:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:26.383 20:27:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.383 20:27:19 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.383 ************************************ 00:07:26.383 START TEST event_reactor 00:07:26.383 ************************************ 00:07:26.383 20:27:19 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:26.383 [2024-12-05 20:27:19.502782] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
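For reference, the event_perf run above can be reproduced by hand. A hedged sketch: the binary path matches this workspace layout, hugepages must already be configured, and the awk totalling of the per-lcore lines is purely illustrative:

# Run the same 1-second, 4-core benchmark and total the per-lcore counts.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
sudo "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1 |
  awk '/^lcore/ { total += $3 } END { print "events/sec, all cores:", total }'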
00:07:26.383 [2024-12-05 20:27:19.502863] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832675 ] 00:07:26.383 [2024-12-05 20:27:19.578342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.383 [2024-12-05 20:27:19.623619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.319 test_start 00:07:27.319 oneshot 00:07:27.319 tick 100 00:07:27.319 tick 100 00:07:27.319 tick 250 00:07:27.319 tick 100 00:07:27.319 tick 100 00:07:27.319 tick 250 00:07:27.319 tick 100 00:07:27.319 tick 500 00:07:27.319 tick 100 00:07:27.319 tick 100 00:07:27.319 tick 250 00:07:27.319 tick 100 00:07:27.319 tick 100 00:07:27.319 test_end 00:07:27.319 00:07:27.319 real 0m1.180s 00:07:27.319 user 0m1.090s 00:07:27.319 sys 0m0.084s 00:07:27.319 20:27:20 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.319 20:27:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:27.319 ************************************ 00:07:27.319 END TEST event_reactor 00:07:27.319 ************************************ 00:07:27.319 20:27:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:27.319 20:27:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:27.319 20:27:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.319 20:27:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.319 ************************************ 00:07:27.319 START TEST event_reactor_perf 00:07:27.319 ************************************ 00:07:27.319 20:27:20 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:27.578 [2024-12-05 20:27:20.757328] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
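Note that the event_reactor test above prints a poller trace (test_start, tick lines, test_end) rather than a throughput number. A sketch of a smoke check over that trace, with the marker names taken from the output logged above:

# Run the reactor test for one second and confirm the expected markers.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
out=$(sudo "$SPDK_DIR/test/event/reactor/reactor" -t 1)
grep -q '^test_start$' <<< "$out" &&
grep -q '^tick '       <<< "$out" &&
grep -q '^test_end$'   <<< "$out" || echo 'unexpected reactor trace' >&2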
00:07:27.578 [2024-12-05 20:27:20.757428] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832876 ] 00:07:27.578 [2024-12-05 20:27:20.830314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.578 [2024-12-05 20:27:20.875791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.514 test_start 00:07:28.514 test_end 00:07:28.514 Performance: 958545 events per second 00:07:28.514 00:07:28.514 real 0m1.176s 00:07:28.514 user 0m1.088s 00:07:28.514 sys 0m0.084s 00:07:28.514 20:27:21 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.514 20:27:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:28.514 ************************************ 00:07:28.514 END TEST event_reactor_perf 00:07:28.514 ************************************ 00:07:28.774 20:27:21 event -- event/event.sh@49 -- # uname -s 00:07:28.774 20:27:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:28.774 20:27:21 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:28.774 20:27:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.774 20:27:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.774 20:27:21 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.774 ************************************ 00:07:28.774 START TEST event_scheduler 00:07:28.774 ************************************ 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:28.774 * Looking for test storage... 
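The reactor_perf figure above (958545 events per second) is the kind of number a nightly job could gate on. A sketch with an arbitrary, illustrative floor value:

# Extract the "Performance: N events per second" line and gate on it.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
events=$(sudo "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1 |
         awk '/^Performance:/ { print $2 }')
(( ${events:-0} >= 500000 )) || echo "WARN: only ${events:-0} events/sec" >&2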
00:07:28.774 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.774 20:27:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:28.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.774 --rc genhtml_branch_coverage=1 00:07:28.774 --rc genhtml_function_coverage=1 00:07:28.774 --rc genhtml_legend=1 00:07:28.774 --rc geninfo_all_blocks=1 00:07:28.774 --rc geninfo_unexecuted_blocks=1 00:07:28.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:28.774 ' 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:28.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.774 --rc genhtml_branch_coverage=1 00:07:28.774 --rc genhtml_function_coverage=1 00:07:28.774 --rc genhtml_legend=1 00:07:28.774 --rc geninfo_all_blocks=1 00:07:28.774 --rc geninfo_unexecuted_blocks=1 00:07:28.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:28.774 ' 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:28.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.774 --rc genhtml_branch_coverage=1 00:07:28.774 --rc genhtml_function_coverage=1 00:07:28.774 --rc genhtml_legend=1 00:07:28.774 --rc geninfo_all_blocks=1 00:07:28.774 --rc geninfo_unexecuted_blocks=1 00:07:28.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:28.774 ' 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:28.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.774 --rc genhtml_branch_coverage=1 00:07:28.774 --rc genhtml_function_coverage=1 00:07:28.774 --rc genhtml_legend=1 00:07:28.774 --rc geninfo_all_blocks=1 00:07:28.774 --rc geninfo_unexecuted_blocks=1 00:07:28.774 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:28.774 ' 00:07:28.774 20:27:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:28.774 20:27:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1833114 00:07:28.774 20:27:22 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:28.774 20:27:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1833114 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1833114 ']' 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.774 20:27:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:28.774 20:27:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:29.034 [2024-12-05 20:27:22.219083] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:29.034 [2024-12-05 20:27:22.219174] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833114 ] 00:07:29.034 [2024-12-05 20:27:22.288573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.034 [2024-12-05 20:27:22.339324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.034 [2024-12-05 20:27:22.339346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.034 [2024-12-05 20:27:22.339425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.034 [2024-12-05 20:27:22.339427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.034 20:27:22 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.034 20:27:22 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:29.034 20:27:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:29.034 20:27:22 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.034 20:27:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.034 [2024-12-05 20:27:22.400144] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:29.034 [2024-12-05 20:27:22.400164] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:29.034 [2024-12-05 20:27:22.400179] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:29.034 [2024-12-05 20:27:22.400188] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:29.034 [2024-12-05 20:27:22.400195] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:29.034 20:27:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.034 20:27:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:29.034 20:27:22 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 
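The scheduler test above pauses the app with --wait-for-rpc, switches it to the dynamic scheduler over RPC, then completes init; the dpdk_governor ERROR about SMT siblings is logged but non-fatal here, since the run proceeds with the governor disabled. A minimal sketch of that sequence (the sleep is a crude stand-in for the harness's waitforlisten helper, and the traced -f flag is omitted):

# Start the scheduler app paused, select the dynamic scheduler, finish init.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
sudo "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc &
scheduler_pid=$!
sleep 1   # stand-in for waitforlisten; the app is torn down later via kill
sudo "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic
sudo "$SPDK_DIR/scripts/rpc.py" framework_start_init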
00:07:29.034 20:27:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 [2024-12-05 20:27:22.480130] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:29.293 20:27:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.293 20:27:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:29.293 20:27:22 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.293 20:27:22 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.293 20:27:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 ************************************ 00:07:29.293 START TEST scheduler_create_thread 00:07:29.293 ************************************ 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 2 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 3 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 4 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 5 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.293 
20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 6 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 7 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 8 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.293 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.293 9 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.294 10 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.294 20:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.230 20:27:23 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.230 20:27:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:30.230 20:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.230 20:27:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.608 20:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.608 20:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:31.608 20:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:31.608 20:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.608 20:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.548 20:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.548 00:07:32.548 real 0m3.384s 00:07:32.548 user 0m0.023s 00:07:32.548 sys 0m0.009s 00:07:32.548 20:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.548 20:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.548 ************************************ 00:07:32.548 END TEST scheduler_create_thread 00:07:32.548 ************************************ 00:07:32.548 20:27:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:32.548 20:27:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1833114 00:07:32.548 20:27:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1833114 ']' 00:07:32.548 20:27:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1833114 00:07:32.548 20:27:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:32.548 20:27:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.548 20:27:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1833114 00:07:32.806 20:27:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:32.806 20:27:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:32.806 20:27:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1833114' 00:07:32.806 killing process with pid 1833114 00:07:32.806 20:27:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1833114 00:07:32.806 20:27:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1833114 00:07:33.064 [2024-12-05 20:27:26.283965] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
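The scheduler_create_thread subtest above drives the whole thread lifecycle through rpc.py plugin calls: create pinned active and idle threads, retarget one to 50% active, delete another. A hedged sketch of the same calls, assuming the scheduler_plugin module sits in the scheduler test directory as it does in the SPDK tree, and noting that thread ids (11 and 12 above) are returned by the create call rather than being fixed:

# Thread lifecycle over the scheduler test app's RPC plugin.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
rpc() {
  sudo PYTHONPATH="$SPDK_DIR/test/event/scheduler" \
    "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"
}
tid=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)
rpc scheduler_thread_set_active "$tid" 50   # drop the thread to 50% active
rpc scheduler_thread_delete "$tid"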
00:07:33.322 00:07:33.322 real 0m4.497s 00:07:33.322 user 0m7.887s 00:07:33.322 sys 0m0.442s 00:07:33.322 20:27:26 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.322 20:27:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:33.322 ************************************ 00:07:33.322 END TEST event_scheduler 00:07:33.322 ************************************ 00:07:33.322 20:27:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:33.322 20:27:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:33.322 20:27:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.322 20:27:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.322 20:27:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.322 ************************************ 00:07:33.322 START TEST app_repeat 00:07:33.322 ************************************ 00:07:33.322 20:27:26 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1833716 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1833716' 00:07:33.322 Process app_repeat pid: 1833716 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:33.322 spdk_app_start Round 0 00:07:33.322 20:27:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1833716 /var/tmp/spdk-nbd.sock 00:07:33.322 20:27:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1833716 ']' 00:07:33.322 20:27:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.322 20:27:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.322 20:27:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:33.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:33.322 20:27:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.322 20:27:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:33.322 [2024-12-05 20:27:26.630291] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
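app_repeat above is launched against a private RPC socket and cycled through repeated start rounds. A sketch of that launch, polling the socket instead of using the harness's waitforlisten (rpc_get_methods serves only as a cheap liveness probe; -r, -m and -t mirror the traced invocation):

# Load nbd, start app_repeat on its own socket, wait until RPC answers.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
sudo modprobe nbd
sudo "$SPDK_DIR/test/event/app_repeat/app_repeat" \
  -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!
until sudo "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock \
      rpc_get_methods > /dev/null 2>&1; do
  sleep 0.2
done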
00:07:33.322 [2024-12-05 20:27:26.630382] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833716 ] 00:07:33.322 [2024-12-05 20:27:26.706249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.322 [2024-12-05 20:27:26.756375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.322 [2024-12-05 20:27:26.756378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.581 20:27:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.581 20:27:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:33.581 20:27:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:33.839 Malloc0 00:07:33.839 20:27:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:33.839 Malloc1 00:07:33.839 20:27:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.839 20:27:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:34.097 /dev/nbd0 00:07:34.097 20:27:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:34.097 20:27:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.097 1+0 records in 00:07:34.097 1+0 records out 00:07:34.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234632 s, 17.5 MB/s 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:34.097 20:27:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:34.097 20:27:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.097 20:27:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.097 20:27:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:34.356 /dev/nbd1 00:07:34.356 20:27:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:34.356 20:27:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.356 1+0 records in 00:07:34.356 1+0 records out 00:07:34.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264192 s, 15.5 MB/s 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:34.356 20:27:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:34.356 20:27:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.356 20:27:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
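The waitfornbd trace above is worth reading closely: the helper first polls /proc/partitions for the device name, then proves the device is actually readable by pulling a single 4 KiB block off it with dd and checking the copied size. A standalone sketch of that pattern (run as root; the /tmp test-file path is illustrative, the harness writes inside the workspace):

# Wait for an nbd device to appear, then read one block to confirm it works.
waitfornbd() {
  local nbd_name=$1 i
  for (( i = 1; i <= 20; i++ )); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1
  done
  (( i <= 20 )) || return 1          # never showed up in /proc/partitions
  for (( i = 1; i <= 20; i++ )); do
    if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct \
        2> /dev/null && [ "$(stat -c %s /tmp/nbdtest)" != 0 ]; then
      rm -f /tmp/nbdtest
      return 0
    fi
    sleep 0.1
  done
  return 1
}
waitfornbd nbd0 && waitfornbd nbd1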
00:07:34.356 20:27:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.356 20:27:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.356 20:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.614 20:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.614 { 00:07:34.614 "nbd_device": "/dev/nbd0", 00:07:34.614 "bdev_name": "Malloc0" 00:07:34.614 }, 00:07:34.614 { 00:07:34.614 "nbd_device": "/dev/nbd1", 00:07:34.614 "bdev_name": "Malloc1" 00:07:34.614 } 00:07:34.614 ]' 00:07:34.614 20:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.614 20:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.614 { 00:07:34.614 "nbd_device": "/dev/nbd0", 00:07:34.614 "bdev_name": "Malloc0" 00:07:34.614 }, 00:07:34.614 { 00:07:34.614 "nbd_device": "/dev/nbd1", 00:07:34.614 "bdev_name": "Malloc1" 00:07:34.614 } 00:07:34.614 ]' 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:34.614 /dev/nbd1' 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:34.614 /dev/nbd1' 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:34.614 256+0 records in 00:07:34.614 256+0 records out 00:07:34.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108421 s, 96.7 MB/s 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.614 20:27:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:34.873 256+0 records in 00:07:34.873 256+0 records out 00:07:34.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021026 s, 49.9 MB/s 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:34.873 256+0 records in 00:07:34.873 256+0 records out 00:07:34.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224582 s, 46.7 MB/s 
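The dd transfers above are the write half of nbd_rpc_data_verify: a 1 MiB random pattern goes through each nbd device with O_DIRECT, and the cmp calls that follow read it back so any corruption in the block path shows up as a byte diff. Both halves as one sketch (device list and pattern path are illustrative):

# Write a 1 MiB random pattern through each device, then read-verify it.
pattern=/tmp/nbdrandtest           # illustrative scratch path
nbd_list=(/dev/nbd0 /dev/nbd1)
dd if=/dev/urandom of="$pattern" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
  sudo dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbd_list[@]}"; do
  sudo cmp -b -n 1M "$pattern" "$nbd" || echo "verify failed on $nbd" >&2
done
rm -f "$pattern"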
00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.873 20:27:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:35.130 20:27:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.131 20:27:28 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.131 20:27:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:35.388 20:27:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:35.388 20:27:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:35.647 20:27:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:35.906 [2024-12-05 20:27:29.174914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:35.906 [2024-12-05 20:27:29.219431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.906 [2024-12-05 20:27:29.219433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.906 [2024-12-05 20:27:29.267530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:35.906 [2024-12-05 20:27:29.267585] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:39.193 20:27:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:39.193 20:27:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:39.193 spdk_app_start Round 1 00:07:39.193 20:27:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1833716 /var/tmp/spdk-nbd.sock 00:07:39.193 20:27:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1833716 ']' 00:07:39.193 20:27:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.193 20:27:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.193 20:27:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
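Between rounds, the trace above detaches both devices, waits for each to leave /proc/partitions, confirms nbd_get_disks reports none attached, and ends the instance with spdk_kill_instance SIGTERM. A condensed sketch of that teardown, reusing the jq/grep count idiom from the trace:

# Detach nbd devices, wait for removal, then tell the app to exit.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
rpc() { sudo "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
for nbd in /dev/nbd0 /dev/nbd1; do
  rpc nbd_stop_disk "$nbd"
  for (( i = 1; i <= 20; i++ )); do
    grep -q -w "${nbd#/dev/}" /proc/partitions || break
    sleep 0.1
  done
done
[ "$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)" = 0 ] ||
  echo 'nbd devices still attached' >&2
rpc spdk_kill_instance SIGTERM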
00:07:39.193 20:27:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.193 20:27:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.193 20:27:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.193 20:27:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:39.193 20:27:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.193 Malloc0 00:07:39.193 20:27:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.193 Malloc1 00:07:39.193 20:27:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.193 20:27:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:39.452 /dev/nbd0 00:07:39.452 20:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:39.452 20:27:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.452 1+0 records in 00:07:39.452 1+0 records out 00:07:39.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233787 s, 17.5 MB/s 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.452 20:27:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:39.452 20:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.452 20:27:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.452 20:27:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:39.711 /dev/nbd1 00:07:39.711 20:27:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.711 20:27:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:39.711 1+0 records in 00:07:39.711 1+0 records out 00:07:39.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237423 s, 17.3 MB/s 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:39.711 20:27:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:39.711 20:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.711 20:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.711 20:27:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.711 20:27:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.711 20:27:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:39.970 { 00:07:39.970 "nbd_device": "/dev/nbd0", 00:07:39.970 "bdev_name": "Malloc0" 00:07:39.970 }, 00:07:39.970 { 00:07:39.970 "nbd_device": "/dev/nbd1", 00:07:39.970 "bdev_name": "Malloc1" 00:07:39.970 } 00:07:39.970 ]' 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:39.970 { 00:07:39.970 "nbd_device": "/dev/nbd0", 00:07:39.970 "bdev_name": "Malloc0" 00:07:39.970 }, 00:07:39.970 { 00:07:39.970 "nbd_device": "/dev/nbd1", 00:07:39.970 "bdev_name": "Malloc1" 00:07:39.970 } 00:07:39.970 ]' 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:39.970 /dev/nbd1' 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:39.970 /dev/nbd1' 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:39.970 256+0 records in 00:07:39.970 256+0 records out 00:07:39.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104821 s, 100 MB/s 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.970 20:27:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:40.229 256+0 records in 00:07:40.229 256+0 records out 00:07:40.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211124 s, 49.7 MB/s 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:40.229 256+0 records in 00:07:40.229 256+0 records out 00:07:40.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222703 s, 47.1 MB/s 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:40.229 20:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.488 20:27:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:40.747 20:27:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:40.747 20:27:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:41.006 20:27:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:41.265 [2024-12-05 20:27:34.527258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.265 [2024-12-05 20:27:34.572035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.265 [2024-12-05 20:27:34.572037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.265 [2024-12-05 20:27:34.620849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:41.265 [2024-12-05 20:27:34.620904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:44.552 20:27:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:44.552 20:27:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:44.552 spdk_app_start Round 2 00:07:44.552 20:27:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1833716 /var/tmp/spdk-nbd.sock 00:07:44.552 20:27:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1833716 ']' 00:07:44.552 20:27:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:44.552 20:27:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.552 20:27:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:44.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
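In each round, nbd_start_disk is followed by the waitfornbd probe traced in Round 1 above: poll /proc/partitions until the kernel registers the device, then read one 4 KiB block with O_DIRECT to confirm it actually services I/O. A sketch reconstructed from the trace — the retry delay and the /tmp scratch path are assumptions; the run itself reads into .../spdk/test/event/nbdtest:

    waitfornbd() {
        local nbd_name=$1 i size

        # Stage 1: poll /proc/partitions (up to 20 tries) until the kernel
        # has registered the device
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1   # delay assumed; the trace only shows the loop bounds
        done

        # Stage 2: prove the device services I/O by reading one 4 KiB block
        # with O_DIRECT, as the dd in the trace does
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            if [ "$size" != 0 ]; then
                return 0
            fi
        done
        return 1
    }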
00:07:44.552 20:27:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.552 20:27:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:44.552 20:27:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.552 20:27:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:44.552 20:27:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.552 Malloc0 00:07:44.552 20:27:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.552 Malloc1 00:07:44.552 20:27:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.552 20:27:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:44.811 /dev/nbd0 00:07:44.811 20:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:44.811 20:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:44.811 1+0 records in 00:07:44.811 1+0 records out 00:07:44.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226065 s, 18.1 MB/s 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.811 20:27:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:44.811 20:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.811 20:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.811 20:27:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:45.069 /dev/nbd1 00:07:45.069 20:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:45.069 20:27:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:45.069 1+0 records in 00:07:45.069 1+0 records out 00:07:45.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025521 s, 16.0 MB/s 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.069 20:27:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:45.069 20:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.069 20:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.069 20:27:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.069 20:27:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.070 20:27:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:07:45.327 20:27:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:45.327 { 00:07:45.327 "nbd_device": "/dev/nbd0", 00:07:45.327 "bdev_name": "Malloc0" 00:07:45.327 }, 00:07:45.327 { 00:07:45.327 "nbd_device": "/dev/nbd1", 00:07:45.327 "bdev_name": "Malloc1" 00:07:45.327 } 00:07:45.327 ]' 00:07:45.327 20:27:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:45.327 { 00:07:45.327 "nbd_device": "/dev/nbd0", 00:07:45.327 "bdev_name": "Malloc0" 00:07:45.327 }, 00:07:45.327 { 00:07:45.327 "nbd_device": "/dev/nbd1", 00:07:45.327 "bdev_name": "Malloc1" 00:07:45.327 } 00:07:45.327 ]' 00:07:45.327 20:27:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:45.328 /dev/nbd1' 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:45.328 /dev/nbd1' 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:45.328 256+0 records in 00:07:45.328 256+0 records out 00:07:45.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115585 s, 90.7 MB/s 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.328 20:27:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:45.586 256+0 records in 00:07:45.586 256+0 records out 00:07:45.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212702 s, 49.3 MB/s 00:07:45.586 20:27:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.586 20:27:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:45.586 256+0 records in 00:07:45.586 256+0 records out 00:07:45.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224912 s, 46.6 MB/s 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.587 20:27:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.845 20:27:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:46.103 20:27:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:46.103 20:27:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:46.361 20:27:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:46.619 [2024-12-05 20:27:39.885233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.619 [2024-12-05 20:27:39.928809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.619 [2024-12-05 20:27:39.928811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.619 [2024-12-05 20:27:39.969821] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:46.619 [2024-12-05 20:27:39.969869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:49.905 20:27:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1833716 /var/tmp/spdk-nbd.sock 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1833716 ']' 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:49.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
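The data-integrity pass traced just above is the same in every round: seed 1 MiB of random data, write it through each nbd device with O_DIRECT, then read it back and compare byte-for-byte. Condensed into a standalone sketch (scratch path shortened from the .../spdk/test/event/nbdrandtest used by the run):

    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # Seed 1 MiB (256 x 4 KiB) of random data
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # Write the pattern to every device, bypassing the page cache
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Read back and compare; cmp exits non-zero on the first differing byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done

    rm "$tmp_file"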
00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:49.905 20:27:42 event.app_repeat -- event/event.sh@39 -- # killprocess 1833716 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1833716 ']' 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1833716 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1833716 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1833716' 00:07:49.905 killing process with pid 1833716 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1833716 00:07:49.905 20:27:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1833716 00:07:49.905 spdk_app_start is called in Round 0. 00:07:49.905 Shutdown signal received, stop current app iteration 00:07:49.905 Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 reinitialization... 00:07:49.905 spdk_app_start is called in Round 1. 00:07:49.905 Shutdown signal received, stop current app iteration 00:07:49.905 Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 reinitialization... 00:07:49.905 spdk_app_start is called in Round 2. 00:07:49.905 Shutdown signal received, stop current app iteration 00:07:49.905 Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 reinitialization... 00:07:49.905 spdk_app_start is called in Round 3. 
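The killprocess helper traced above guards the kill with two checks: kill -0 to confirm the pid is still alive, and ps -o comm= to make sure it is not about to signal a sudo wrapper. A sketch of that logic as the trace shows it — the sudo branch of the real helper is more involved and is left out here:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                 # bail out if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the traced run resolves to reactor_0; a sudo wrapper would need
        # different handling, which this sketch omits
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }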
00:07:49.905 Shutdown signal received, stop current app iteration 00:07:49.905 20:27:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:49.905 20:27:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:49.905 00:07:49.905 real 0m16.504s 00:07:49.905 user 0m35.595s 00:07:49.905 sys 0m3.224s 00:07:49.905 20:27:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.905 20:27:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:49.905 ************************************ 00:07:49.905 END TEST app_repeat 00:07:49.905 ************************************ 00:07:49.905 20:27:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:49.905 20:27:43 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:49.906 20:27:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.906 20:27:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.906 20:27:43 event -- common/autotest_common.sh@10 -- # set +x 00:07:49.906 ************************************ 00:07:49.906 START TEST cpu_locks 00:07:49.906 ************************************ 00:07:49.906 20:27:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:49.906 * Looking for test storage... 00:07:49.906 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:49.906 20:27:43 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:49.906 20:27:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:49.906 20:27:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.165 20:27:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.166 20:27:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:50.166 20:27:43 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.166 20:27:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.166 --rc genhtml_branch_coverage=1 00:07:50.166 --rc genhtml_function_coverage=1 00:07:50.166 --rc genhtml_legend=1 00:07:50.166 --rc geninfo_all_blocks=1 00:07:50.166 --rc geninfo_unexecuted_blocks=1 00:07:50.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.166 ' 00:07:50.166 20:27:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.166 --rc genhtml_branch_coverage=1 00:07:50.166 --rc genhtml_function_coverage=1 00:07:50.166 --rc genhtml_legend=1 00:07:50.166 --rc geninfo_all_blocks=1 00:07:50.166 --rc geninfo_unexecuted_blocks=1 00:07:50.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.166 ' 00:07:50.166 20:27:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.166 --rc genhtml_branch_coverage=1 00:07:50.166 --rc genhtml_function_coverage=1 00:07:50.166 --rc genhtml_legend=1 00:07:50.166 --rc geninfo_all_blocks=1 00:07:50.166 --rc geninfo_unexecuted_blocks=1 00:07:50.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.166 ' 00:07:50.166 20:27:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.166 --rc genhtml_branch_coverage=1 00:07:50.166 --rc genhtml_function_coverage=1 00:07:50.166 --rc genhtml_legend=1 00:07:50.166 --rc geninfo_all_blocks=1 00:07:50.166 --rc geninfo_unexecuted_blocks=1 00:07:50.166 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:50.166 ' 00:07:50.166 20:27:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:50.166 20:27:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:50.166 20:27:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:50.166 20:27:43 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:50.166 20:27:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.166 20:27:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.166 20:27:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.166 ************************************ 00:07:50.166 START TEST default_locks 00:07:50.166 ************************************ 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1836157 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1836157 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1836157 ']' 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.166 20:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.166 [2024-12-05 20:27:43.429801] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
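Before the locks tests start, the harness checks the installed lcov against version 2; the lt/cmp_versions trace above splits both version strings on '.', '-' and ':' and compares the components numerically, padding the shorter list with zeros. A simplified, hedged reconstruction covering only the '<' case exercised here (the real helper in scripts/common.sh also supports other operators and normalizes each component with its decimal helper):

    # Succeeds when $1 is strictly older than $2, e.g. lt 1.15 2 -> true.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components count as 0
            (( d1 > d2 )) && return 1
            (( d1 < d2 )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov is older than 2"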
00:07:50.166 [2024-12-05 20:27:43.429868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836157 ] 00:07:50.166 [2024-12-05 20:27:43.502939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.166 [2024-12-05 20:27:43.549136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.425 20:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.425 20:27:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:50.425 20:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1836157 00:07:50.425 20:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1836157 00:07:50.425 20:27:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:50.994 lslocks: write error 00:07:50.994 20:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1836157 00:07:50.994 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1836157 ']' 00:07:50.994 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1836157 00:07:50.994 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:50.994 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.994 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836157 00:07:51.253 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.253 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.253 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836157' 00:07:51.253 killing process with pid 1836157 00:07:51.253 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1836157 00:07:51.253 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1836157 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1836157 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1836157 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1836157 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1836157 ']' 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.513 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1836157) - No such process 00:07:51.513 ERROR: process (pid: 1836157) is no longer running 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:51.513 00:07:51.513 real 0m1.385s 00:07:51.513 user 0m1.350s 00:07:51.513 sys 0m0.672s 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.513 20:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.513 ************************************ 00:07:51.513 END TEST default_locks 00:07:51.513 ************************************ 00:07:51.513 20:27:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:51.513 20:27:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.513 20:27:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.513 20:27:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.513 ************************************ 00:07:51.513 START TEST default_locks_via_rpc 00:07:51.513 ************************************ 00:07:51.513 20:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:51.513 20:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1836373 00:07:51.514 20:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1836373 00:07:51.514 20:27:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:51.514 20:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1836373 ']' 00:07:51.514 20:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.514 20:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 
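The locks_exist check used by default_locks above lists the process's file locks and greps for spdk_cpu_lock; the stray 'lslocks: write error' in the trace is most likely lslocks hitting a closed pipe once grep -q has matched, not a test failure. The check itself is compact:

    # Does the process with the given pid hold an SPDK core-lock file lock?
    locks_exist() {
        # grep -q exits at the first match, so lslocks can report a harmless
        # "write error" when its stdout pipe closes early -- the same message
        # that appears in the trace above
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist 1836157 && echo "core lock held"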
00:07:51.514 20:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.514 20:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.514 20:27:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.514 [2024-12-05 20:27:44.903583] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:51.514 [2024-12-05 20:27:44.903658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836373 ] 00:07:51.774 [2024-12-05 20:27:44.978372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.774 [2024-12-05 20:27:45.022715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1836373 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1836373 00:07:52.033 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1836373 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1836373 ']' 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1836373 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836373 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836373' 00:07:52.291 killing process with pid 1836373 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1836373 00:07:52.291 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1836373 00:07:52.549 00:07:52.549 real 0m1.112s 00:07:52.549 user 0m1.061s 00:07:52.549 sys 0m0.522s 00:07:52.808 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.808 20:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.808 ************************************ 00:07:52.808 END TEST default_locks_via_rpc 00:07:52.808 ************************************ 00:07:52.808 20:27:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:52.808 20:27:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.808 20:27:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.808 20:27:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.808 ************************************ 00:07:52.808 START TEST non_locking_app_on_locked_coremask 00:07:52.808 ************************************ 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1836579 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1836579 /var/tmp/spdk.sock 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1836579 ']' 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.808 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.808 [2024-12-05 20:27:46.080929] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
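The default_locks_via_rpc test that just finished drove the same lock from the RPC side: framework_disable_cpumask_locks should release the spdk_cpu_lock file and framework_enable_cpumask_locks should re-acquire it. Roughly, with the rpc.py used throughout this run on the default /var/tmp/spdk.sock (the pid variable stands in for whatever waitforlisten returned):

    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    pid=1836373   # the spdk_tgt pid from this part of the trace

    # Release the per-core lock files at runtime and confirm none are held...
    $rpc framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"

    # ...then take them again
    $rpc framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "unexpected: lock not re-acquired"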
00:07:52.808 [2024-12-05 20:27:46.080995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836579 ] 00:07:52.808 [2024-12-05 20:27:46.154123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.808 [2024-12-05 20:27:46.203370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.067 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.067 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:53.067 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:53.067 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1836663 00:07:53.067 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1836663 /var/tmp/spdk2.sock 00:07:53.067 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1836663 ']' 00:07:53.067 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.068 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.068 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:53.068 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.068 20:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.068 [2024-12-05 20:27:46.444878] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:53.068 [2024-12-05 20:27:46.444937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836663 ] 00:07:53.327 [2024-12-05 20:27:46.541630] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
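non_locking_app_on_locked_coremask shows why a second target can share core 0: it is launched with --disable-cpumask-locks (hence the 'CPU core locks deactivated' notice above) and with its own RPC socket, so the two instances do not contend for the same spdk_cpu_lock. The launch pattern, condensed from the trace — note the harness also passes per-instance DPDK parameters (visible in the EAL lines above) so the processes keep separate hugepage state, which this sketch omits:

    spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt

    # First instance takes core 0 and acquires its core lock
    $spdk_tgt -m 0x1 &
    pid1=$!

    # Second instance shares core 0 but skips the lock and listens on its
    # own RPC socket
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!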
00:07:53.327 [2024-12-05 20:27:46.541668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.327 [2024-12-05 20:27:46.644184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.263 20:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.263 20:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:54.263 20:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1836579 00:07:54.263 20:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1836579 00:07:54.263 20:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:55.200 lslocks: write error 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1836579 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1836579 ']' 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1836579 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836579 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836579' 00:07:55.200 killing process with pid 1836579 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1836579 00:07:55.200 20:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1836579 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1836663 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1836663 ']' 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1836663 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836663 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836663' 00:07:55.769 
killing process with pid 1836663 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1836663 00:07:55.769 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1836663 00:07:56.338 00:07:56.338 real 0m3.421s 00:07:56.338 user 0m3.557s 00:07:56.338 sys 0m1.239s 00:07:56.338 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.338 20:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.338 ************************************ 00:07:56.338 END TEST non_locking_app_on_locked_coremask 00:07:56.338 ************************************ 00:07:56.338 20:27:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:56.338 20:27:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.338 20:27:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.338 20:27:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.338 ************************************ 00:07:56.338 START TEST locking_app_on_unlocked_coremask 00:07:56.338 ************************************ 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1837153 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1837153 /var/tmp/spdk.sock 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1837153 ']' 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.338 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.338 [2024-12-05 20:27:49.595633] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:07:56.338 [2024-12-05 20:27:49.595702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837153 ] 00:07:56.338 [2024-12-05 20:27:49.667872] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:56.338 [2024-12-05 20:27:49.667907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.338 [2024-12-05 20:27:49.716691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1837165 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1837165 /var/tmp/spdk2.sock 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1837165 ']' 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.598 20:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.598 [2024-12-05 20:27:49.988079] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:56.598 [2024-12-05 20:27:49.988152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837165 ] 00:07:56.858 [2024-12-05 20:27:50.095542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.858 [2024-12-05 20:27:50.185107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.793 20:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.793 20:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:57.793 20:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1837165 00:07:57.793 20:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1837165 00:07:57.793 20:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.731 lslocks: write error 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1837153 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1837153 ']' 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1837153 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1837153 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1837153' 00:07:58.731 killing process with pid 1837153 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1837153 00:07:58.731 20:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1837153 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1837165 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1837165 ']' 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1837165 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1837165 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.301 20:27:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1837165' 00:07:59.301 killing process with pid 1837165 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1837165 00:07:59.301 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1837165 00:07:59.564 00:07:59.564 real 0m3.406s 00:07:59.564 user 0m3.570s 00:07:59.564 sys 0m1.277s 00:07:59.564 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.564 20:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.564 ************************************ 00:07:59.564 END TEST locking_app_on_unlocked_coremask 00:07:59.564 ************************************ 00:07:59.822 20:27:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:59.822 20:27:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.822 20:27:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.822 20:27:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.822 ************************************ 00:07:59.822 START TEST locking_app_on_locked_coremask 00:07:59.822 ************************************ 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1837572 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1837572 /var/tmp/spdk.sock 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1837572 ']' 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.822 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.822 [2024-12-05 20:27:53.090281] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:07:59.822 [2024-12-05 20:27:53.090361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837572 ] 00:07:59.822 [2024-12-05 20:27:53.165199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.822 [2024-12-05 20:27:53.214099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1837699 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1837699 /var/tmp/spdk2.sock 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1837699 /var/tmp/spdk2.sock 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1837699 /var/tmp/spdk2.sock 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1837699 ']' 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.083 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.084 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.084 20:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.084 [2024-12-05 20:27:53.467684] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:00.084 [2024-12-05 20:27:53.467780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837699 ] 00:08:00.393 [2024-12-05 20:27:53.571372] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1837572 has claimed it. 00:08:00.393 [2024-12-05 20:27:53.571419] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:00.714 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1837699) - No such process 00:08:00.714 ERROR: process (pid: 1837699) is no longer running 00:08:00.714 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.714 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:00.714 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:00.714 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.714 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:00.714 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.714 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1837572 00:08:00.714 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1837572 00:08:00.714 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:01.277 lslocks: write error 00:08:01.277 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1837572 00:08:01.277 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1837572 ']' 00:08:01.277 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1837572 00:08:01.277 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:01.277 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.277 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1837572 00:08:01.277 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.277 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.278 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1837572' 00:08:01.278 killing process with pid 1837572 00:08:01.278 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1837572 00:08:01.278 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1837572 00:08:01.534 00:08:01.534 real 0m1.875s 00:08:01.534 user 0m1.966s 00:08:01.534 sys 0m0.671s 00:08:01.534 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
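Note: the "Cannot create lock on core 0, probably process 1837572 has claimed it" error above, followed by the expected "No such process" from kill, is the positive result here: the second target must refuse to start. SPDK takes these locks in C inside app.c, but a rough shell analogue of the per-core claim, assuming the /var/tmp/spdk_cpu_lock_000 file name that appears later in this trace, is:

    # Try to claim "core 0" via a non-blocking advisory lock on its lock file.
    exec 9>/var/tmp/spdk_cpu_lock_000
    if ! flock -n 9; then
        echo "core 0 already claimed by another process" >&2
        exit 1
    fi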
00:08:01.534 20:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.534 ************************************ 00:08:01.534 END TEST locking_app_on_locked_coremask 00:08:01.534 ************************************ 00:08:01.534 20:27:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:01.534 20:27:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.534 20:27:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.534 20:27:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.791 ************************************ 00:08:01.791 START TEST locking_overlapped_coremask 00:08:01.791 ************************************ 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1837962 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1837962 /var/tmp/spdk.sock 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1837962 ']' 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:01.791 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.791 [2024-12-05 20:27:55.033697] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:01.791 [2024-12-05 20:27:55.033762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837962 ] 00:08:01.791 [2024-12-05 20:27:55.106356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.791 [2024-12-05 20:27:55.158062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.791 [2024-12-05 20:27:55.158149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.791 [2024-12-05 20:27:55.158151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1837972 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1837972 /var/tmp/spdk2.sock 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1837972 /var/tmp/spdk2.sock 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1837972 /var/tmp/spdk2.sock 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1837972 ']' 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.049 20:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:02.049 [2024-12-05 20:27:55.405249] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:02.049 [2024-12-05 20:27:55.405328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837972 ] 00:08:02.306 [2024-12-05 20:27:55.509163] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1837962 has claimed it. 00:08:02.306 [2024-12-05 20:27:55.509204] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:02.870 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1837972) - No such process 00:08:02.870 ERROR: process (pid: 1837972) is no longer running 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1837962 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1837962 ']' 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1837962 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1837962 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1837962' 00:08:02.870 killing process with pid 1837962 00:08:02.870 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1837962 00:08:02.870 20:27:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1837962 00:08:03.129 00:08:03.129 real 0m1.455s 00:08:03.129 user 0m4.046s 00:08:03.129 sys 0m0.418s 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.129 ************************************ 00:08:03.129 END TEST locking_overlapped_coremask 00:08:03.129 ************************************ 00:08:03.129 20:27:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:03.129 20:27:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.129 20:27:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.129 20:27:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.129 ************************************ 00:08:03.129 START TEST locking_overlapped_coremask_via_rpc 00:08:03.129 ************************************ 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1838182 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1838182 /var/tmp/spdk.sock 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1838182 ']' 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.129 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.387 [2024-12-05 20:27:56.576438] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:03.387 [2024-12-05 20:27:56.576510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838182 ] 00:08:03.387 [2024-12-05 20:27:56.650139] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
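Note: the check_remaining_locks helper, visible at cpu_locks.sh@36-38 earlier in this trace, is just a glob-versus-brace-expansion comparison; reformatted for readability it amounts to:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # files a 3-core (0x7) run should leave
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # pass iff the two sets match exactly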
00:08:03.387 [2024-12-05 20:27:56.650171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.387 [2024-12-05 20:27:56.701043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.387 [2024-12-05 20:27:56.701134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.387 [2024-12-05 20:27:56.701137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1838188 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1838188 /var/tmp/spdk2.sock 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1838188 ']' 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:03.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.645 20:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.645 [2024-12-05 20:27:56.959248] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:03.645 [2024-12-05 20:27:56.959324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838188 ] 00:08:03.645 [2024-12-05 20:27:57.063389] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:03.645 [2024-12-05 20:27:57.063426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.903 [2024-12-05 20:27:57.168155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.903 [2024-12-05 20:27:57.168266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.903 [2024-12-05 20:27:57.168268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.468 [2024-12-05 20:27:57.852809] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1838182 has claimed it. 
00:08:04.468 request: 00:08:04.468 { 00:08:04.468 "method": "framework_enable_cpumask_locks", 00:08:04.468 "req_id": 1 00:08:04.468 } 00:08:04.468 Got JSON-RPC error response 00:08:04.468 response: 00:08:04.468 { 00:08:04.468 "code": -32603, 00:08:04.468 "message": "Failed to claim CPU core: 2" 00:08:04.468 } 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1838182 /var/tmp/spdk.sock 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1838182 ']' 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.468 20:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.727 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.727 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:04.727 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1838188 /var/tmp/spdk2.sock 00:08:04.727 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1838188 ']' 00:08:04.727 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.727 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.727 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:04.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
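Note: the JSON-RPC exchange above is the deferred path the via_rpc test exercises: both targets start with --disable-cpumask-locks, the first enables locks over RPC and wins the claim, and the second's attempt fails with -32603 because core 2 is already held. Driving the same RPC by hand would look roughly like this (rpc.py path assumed; the socket names match the ones in this trace):

    ./scripts/rpc.py framework_enable_cpumask_locks                        # first target: succeeds
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # => error -32603, "Failed to claim CPU core: 2"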
00:08:04.727 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.727 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.986 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.986 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:04.986 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:04.986 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:04.986 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:04.986 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:04.986 00:08:04.986 real 0m1.726s 00:08:04.986 user 0m0.813s 00:08:04.986 sys 0m0.166s 00:08:04.986 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.986 20:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.986 ************************************ 00:08:04.986 END TEST locking_overlapped_coremask_via_rpc 00:08:04.986 ************************************ 00:08:04.986 20:27:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:04.986 20:27:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1838182 ]] 00:08:04.986 20:27:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1838182 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1838182 ']' 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1838182 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838182 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838182' 00:08:04.986 killing process with pid 1838182 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1838182 00:08:04.986 20:27:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1838182 00:08:05.552 20:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1838188 ]] 00:08:05.552 20:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1838188 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1838188 ']' 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1838188 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838188 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838188' 00:08:05.552 killing process with pid 1838188 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1838188 00:08:05.552 20:27:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1838188 00:08:05.810 20:27:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:05.810 20:27:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:05.810 20:27:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1838182 ]] 00:08:05.810 20:27:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1838182 00:08:05.810 20:27:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1838182 ']' 00:08:05.810 20:27:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1838182 00:08:05.810 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1838182) - No such process 00:08:05.810 20:27:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1838182 is not found' 00:08:05.810 Process with pid 1838182 is not found 00:08:05.810 20:27:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1838188 ]] 00:08:05.810 20:27:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1838188 00:08:05.810 20:27:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1838188 ']' 00:08:05.810 20:27:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1838188 00:08:05.810 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1838188) - No such process 00:08:05.810 20:27:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1838188 is not found' 00:08:05.810 Process with pid 1838188 is not found 00:08:05.810 20:27:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:05.810 00:08:05.810 real 0m15.939s 00:08:05.810 user 0m26.517s 00:08:05.810 sys 0m6.075s 00:08:05.810 20:27:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.810 20:27:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.810 ************************************ 00:08:05.810 END TEST cpu_locks 00:08:05.810 ************************************ 00:08:05.810 00:08:05.810 real 0m41.151s 00:08:05.810 user 1m16.539s 00:08:05.810 sys 0m10.447s 00:08:05.810 20:27:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.810 20:27:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:05.810 ************************************ 00:08:05.810 END TEST event 00:08:05.810 ************************************ 00:08:05.810 20:27:59 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:08:05.810 20:27:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.810 20:27:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.810 20:27:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.810 ************************************ 00:08:05.810 START TEST thread 00:08:05.810 ************************************ 00:08:05.810 20:27:59 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:08:06.069 * Looking for test storage... 00:08:06.069 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:08:06.069 20:27:59 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.070 20:27:59 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.070 20:27:59 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.070 20:27:59 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.070 20:27:59 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.070 20:27:59 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.070 20:27:59 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.070 20:27:59 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.070 20:27:59 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.070 20:27:59 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.070 20:27:59 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.070 20:27:59 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.070 20:27:59 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:06.070 20:27:59 thread -- scripts/common.sh@345 -- # : 1 00:08:06.070 20:27:59 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.070 20:27:59 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.070 20:27:59 thread -- scripts/common.sh@365 -- # decimal 1 00:08:06.070 20:27:59 thread -- scripts/common.sh@353 -- # local d=1 00:08:06.070 20:27:59 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.070 20:27:59 thread -- scripts/common.sh@355 -- # echo 1 00:08:06.070 20:27:59 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.070 20:27:59 thread -- scripts/common.sh@366 -- # decimal 2 00:08:06.070 20:27:59 thread -- scripts/common.sh@353 -- # local d=2 00:08:06.070 20:27:59 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.070 20:27:59 thread -- scripts/common.sh@355 -- # echo 2 00:08:06.070 20:27:59 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.070 20:27:59 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.070 20:27:59 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.070 20:27:59 thread -- scripts/common.sh@368 -- # return 0 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.070 --rc genhtml_branch_coverage=1 00:08:06.070 --rc genhtml_function_coverage=1 00:08:06.070 --rc genhtml_legend=1 00:08:06.070 --rc geninfo_all_blocks=1 00:08:06.070 --rc geninfo_unexecuted_blocks=1 00:08:06.070 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:06.070 ' 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.070 --rc genhtml_branch_coverage=1 00:08:06.070 --rc genhtml_function_coverage=1 00:08:06.070 --rc genhtml_legend=1 
00:08:06.070 --rc geninfo_all_blocks=1 00:08:06.070 --rc geninfo_unexecuted_blocks=1 00:08:06.070 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:06.070 ' 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:06.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.070 --rc genhtml_branch_coverage=1 00:08:06.070 --rc genhtml_function_coverage=1 00:08:06.070 --rc genhtml_legend=1 00:08:06.070 --rc geninfo_all_blocks=1 00:08:06.070 --rc geninfo_unexecuted_blocks=1 00:08:06.070 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:06.070 ' 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.070 --rc genhtml_branch_coverage=1 00:08:06.070 --rc genhtml_function_coverage=1 00:08:06.070 --rc genhtml_legend=1 00:08:06.070 --rc geninfo_all_blocks=1 00:08:06.070 --rc geninfo_unexecuted_blocks=1 00:08:06.070 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:06.070 ' 00:08:06.070 20:27:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.070 20:27:59 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.070 ************************************ 00:08:06.070 START TEST thread_poller_perf 00:08:06.070 ************************************ 00:08:06.070 20:27:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:06.070 [2024-12-05 20:27:59.479071] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:06.070 [2024-12-05 20:27:59.479179] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838659 ] 00:08:06.329 [2024-12-05 20:27:59.556693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.329 [2024-12-05 20:27:59.601777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.329 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:07.264 [2024-12-05T19:28:00.702Z] ======================================
00:08:07.264 [2024-12-05T19:28:00.702Z] busy:2303707240 (cyc)
00:08:07.264 [2024-12-05T19:28:00.702Z] total_run_count: 812000
00:08:07.264 [2024-12-05T19:28:00.702Z] tsc_hz: 2300000000 (cyc)
00:08:07.264 [2024-12-05T19:28:00.702Z] ======================================
00:08:07.264 [2024-12-05T19:28:00.702Z] poller_cost: 2837 (cyc), 1233 (nsec)
00:08:07.264
00:08:07.264 real 0m1.188s
00:08:07.264 user 0m1.097s
00:08:07.264 sys 0m0.086s
00:08:07.264 20:28:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:07.264 20:28:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:07.264 ************************************
00:08:07.264 END TEST thread_poller_perf
00:08:07.264 ************************************
00:08:07.264 20:28:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:07.264 20:28:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:08:07.264 20:28:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:07.264 20:28:00 thread -- common/autotest_common.sh@10 -- # set +x
00:08:07.523 ************************************
00:08:07.523 START TEST thread_poller_perf
00:08:07.523 ************************************
00:08:07.523 20:28:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:07.523 [2024-12-05 20:28:00.756569] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization...
00:08:07.523 [2024-12-05 20:28:00.756659] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838858 ]
00:08:07.523 [2024-12-05 20:28:00.833804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:07.523 [2024-12-05 20:28:00.882335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:07.523 Running 1000 pollers for 1 seconds with 0 microseconds period.
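The poller_cost line in the report above is plain arithmetic on the two counters beside it: busy cycles divided by total_run_count gives cycles per poll, and scaling by tsc_hz converts cycles to nanoseconds. A minimal bash sketch of that calculation, using the figures from the 1-microsecond-period run (the variable names are illustrative, not poller_perf's own):

  busy=2303707240         # busy cycles reported for the run
  total_run_count=812000  # completed poller executions
  tsc_hz=2300000000       # timestamp-counter frequency, 2.3 GHz
  cost_cyc=$(( busy / total_run_count ))           # 2837 cyc per poll
  cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # 1233 nsec per poll
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same arithmetic applied to the 0-period run below yields 176 cyc and 76 nsec: pollers registered without a period are far cheaper per invocation than the timed pollers of the first run.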
00:08:08.894 [2024-12-05T19:28:02.332Z] ====================================== 00:08:08.894 [2024-12-05T19:28:02.332Z] busy:2301351336 (cyc) 00:08:08.894 [2024-12-05T19:28:02.332Z] total_run_count: 13062000 00:08:08.894 [2024-12-05T19:28:02.332Z] tsc_hz: 2300000000 (cyc) 00:08:08.894 [2024-12-05T19:28:02.332Z] ====================================== 00:08:08.894 [2024-12-05T19:28:02.332Z] poller_cost: 176 (cyc), 76 (nsec) 00:08:08.894 00:08:08.894 real 0m1.187s 00:08:08.894 user 0m1.111s 00:08:08.894 sys 0m0.072s 00:08:08.895 20:28:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.895 20:28:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:08.895 ************************************ 00:08:08.895 END TEST thread_poller_perf 00:08:08.895 ************************************ 00:08:08.895 20:28:01 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:08:08.895 20:28:01 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:08:08.895 20:28:01 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.895 20:28:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.895 20:28:01 thread -- common/autotest_common.sh@10 -- # set +x 00:08:08.895 ************************************ 00:08:08.895 START TEST thread_spdk_lock 00:08:08.895 ************************************ 00:08:08.895 20:28:02 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:08:08.895 [2024-12-05 20:28:02.030520] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:08.895 [2024-12-05 20:28:02.030616] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839058 ] 00:08:08.895 [2024-12-05 20:28:02.106624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.895 [2024-12-05 20:28:02.156925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.895 [2024-12-05 20:28:02.156927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.461 [2024-12-05 20:28:02.646658] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:09.461 [2024-12-05 20:28:02.646695] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:09.461 [2024-12-05 20:28:02.646706] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14dbe80 00:08:09.461 [2024-12-05 20:28:02.647380] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:09.461 [2024-12-05 20:28:02.647484] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:08:09.462 [2024-12-05 
20:28:02.647503] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:08:09.462 Starting test contend
00:08:09.462 Worker Delay Wait us Hold us Total us
00:08:09.462 0 3 171569 185177 356747
00:08:09.462 1 5 88345 286239 374584
00:08:09.462 PASS test contend
00:08:09.462 Starting test hold_by_poller
00:08:09.462 PASS test hold_by_poller
00:08:09.462 Starting test hold_by_message
00:08:09.462 PASS test hold_by_message
00:08:09.462 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary:
00:08:09.462 100014 assertions passed
00:08:09.462 0 assertions failed
00:08:09.462
00:08:09.462 real 0m0.674s
00:08:09.462 user 0m1.075s
00:08:09.462 sys 0m0.086s
00:08:09.462 20:28:02 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.462 20:28:02 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
00:08:09.462 ************************************
00:08:09.462 END TEST thread_spdk_lock
00:08:09.462 ************************************
00:08:09.462
00:08:09.462 real 0m3.505s
00:08:09.462 user 0m3.470s
00:08:09.462 sys 0m0.547s
00:08:09.462 20:28:02 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.462 20:28:02 thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.462 ************************************
00:08:09.462 END TEST thread
00:08:09.462 ************************************
00:08:09.462 20:28:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:08:09.462 20:28:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:08:09.462 20:28:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:09.462 20:28:02 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:09.462 20:28:02 -- common/autotest_common.sh@10 -- # set +x
00:08:09.462 ************************************
00:08:09.462 START TEST app_cmdline
00:08:09.462 ************************************
00:08:09.462 20:28:02 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh
00:08:09.721 * Looking for test storage...
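The lcov probe that opens every test section (the lt 1.15 2 walk through scripts/common.sh, repeated below for app_cmdline) reduces to a field-by-field numeric compare: both version strings are split on '.', '-' and ':', and the first unequal field decides. A condensed re-implementation as a sketch only; the real helper additionally routes each field through its decimal validation step:

  lt() {  # succeeds when $1 is strictly older than $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly older
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # strictly newer
    done
    return 1  # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 < 2: keep the classic --rc lcov_* options"

Because lcov 1.15 sorts below 2, every section ends up exporting the old-style --rc lcov_branch_coverage/--rc lcov_function_coverage flags seen throughout these traces.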
00:08:09.721 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:09.721 20:28:02 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:09.721 20:28:02 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:09.721 20:28:02 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:09.721 20:28:02 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:09.721 20:28:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.721 20:28:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.721 20:28:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.721 20:28:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.721 20:28:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.721 20:28:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.721 20:28:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.721 20:28:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.721 20:28:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:09.721 20:28:03 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.721 20:28:03 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:09.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.721 --rc genhtml_branch_coverage=1 00:08:09.721 --rc genhtml_function_coverage=1 00:08:09.721 --rc genhtml_legend=1 00:08:09.721 --rc geninfo_all_blocks=1 00:08:09.721 --rc geninfo_unexecuted_blocks=1 00:08:09.721 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:09.721 ' 00:08:09.721 20:28:03 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:09.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.721 --rc genhtml_branch_coverage=1 00:08:09.721 --rc genhtml_function_coverage=1 00:08:09.721 --rc 
genhtml_legend=1 00:08:09.721 --rc geninfo_all_blocks=1 00:08:09.721 --rc geninfo_unexecuted_blocks=1 00:08:09.721 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:09.721 ' 00:08:09.721 20:28:03 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:09.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.721 --rc genhtml_branch_coverage=1 00:08:09.721 --rc genhtml_function_coverage=1 00:08:09.721 --rc genhtml_legend=1 00:08:09.721 --rc geninfo_all_blocks=1 00:08:09.721 --rc geninfo_unexecuted_blocks=1 00:08:09.721 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:09.721 ' 00:08:09.721 20:28:03 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:09.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.722 --rc genhtml_branch_coverage=1 00:08:09.722 --rc genhtml_function_coverage=1 00:08:09.722 --rc genhtml_legend=1 00:08:09.722 --rc geninfo_all_blocks=1 00:08:09.722 --rc geninfo_unexecuted_blocks=1 00:08:09.722 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:09.722 ' 00:08:09.722 20:28:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:09.722 20:28:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1839218 00:08:09.722 20:28:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:09.722 20:28:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1839218 00:08:09.722 20:28:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1839218 ']' 00:08:09.722 20:28:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.722 20:28:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.722 20:28:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.722 20:28:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.722 20:28:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:09.722 [2024-12-05 20:28:03.034264] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:09.722 [2024-12-05 20:28:03.034328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839218 ] 00:08:09.722 [2024-12-05 20:28:03.097884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.722 [2024-12-05 20:28:03.146397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.980 20:28:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.980 20:28:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:09.980 20:28:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:10.239 { 00:08:10.239 "version": "SPDK v25.01-pre git sha1 2c140f58f", 00:08:10.239 "fields": { 00:08:10.239 "major": 25, 00:08:10.239 "minor": 1, 00:08:10.239 "patch": 0, 00:08:10.239 "suffix": "-pre", 00:08:10.239 "commit": "2c140f58f" 00:08:10.239 } 00:08:10.239 } 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:10.239 20:28:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:10.239 20:28:03 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]]
00:08:10.239 20:28:03 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:10.497 request:
00:08:10.497 {
00:08:10.497 "method": "env_dpdk_get_mem_stats",
00:08:10.497 "req_id": 1
00:08:10.497 }
00:08:10.497 Got JSON-RPC error response
00:08:10.497 response:
00:08:10.497 {
00:08:10.497 "code": -32601,
00:08:10.497 "message": "Method not found"
00:08:10.497 }
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:10.497 20:28:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1839218
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1839218 ']'
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1839218
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1839218
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1839218'
00:08:10.497 killing process with pid 1839218
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@973 -- # kill 1839218
00:08:10.497 20:28:03 app_cmdline -- common/autotest_common.sh@978 -- # wait 1839218
00:08:10.756
00:08:10.756 real 0m1.360s
00:08:10.756 user 0m1.563s
00:08:10.756 sys 0m0.473s
00:08:10.756 20:28:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:10.756 20:28:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:10.756 ************************************
00:08:10.756 END TEST app_cmdline
00:08:10.756 ************************************
00:08:11.014 20:28:04 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
00:08:11.014 20:28:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:11.014 20:28:04 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:11.014 20:28:04 -- common/autotest_common.sh@10 -- # set +x
00:08:11.014 ************************************
00:08:11.014 START TEST version
00:08:11.014 ************************************
00:08:11.014 20:28:04 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh
00:08:11.014 * Looking for test storage...
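That exchange is the point of the cmdline test: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the two whitelisted methods answer normally while anything else, here env_dpdk_get_mem_stats, is refused with JSON-RPC error -32601. A sketch of the same round trip from the workspace root (the sleep is only a crude wait for /var/tmp/spdk.sock to appear and is not part of cmdline.sh):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 2                                   # let the RPC socket come up
  ./scripts/rpc.py spdk_get_version         # allowed: version JSON as above
  ./scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two methods
  ./scripts/rpc.py env_dpdk_get_mem_stats   # refused: -32601 "Method not found"
  kill %1                                   # tear the target back down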
00:08:11.014 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:11.014 20:28:04 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.014 20:28:04 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.014 20:28:04 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.014 20:28:04 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.014 20:28:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.014 20:28:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.014 20:28:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.014 20:28:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.014 20:28:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.014 20:28:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.014 20:28:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.014 20:28:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.014 20:28:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.014 20:28:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.014 20:28:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.014 20:28:04 version -- scripts/common.sh@344 -- # case "$op" in 00:08:11.014 20:28:04 version -- scripts/common.sh@345 -- # : 1 00:08:11.014 20:28:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.014 20:28:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.272 20:28:04 version -- scripts/common.sh@365 -- # decimal 1 00:08:11.272 20:28:04 version -- scripts/common.sh@353 -- # local d=1 00:08:11.272 20:28:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.272 20:28:04 version -- scripts/common.sh@355 -- # echo 1 00:08:11.272 20:28:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.272 20:28:04 version -- scripts/common.sh@366 -- # decimal 2 00:08:11.272 20:28:04 version -- scripts/common.sh@353 -- # local d=2 00:08:11.272 20:28:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.272 20:28:04 version -- scripts/common.sh@355 -- # echo 2 00:08:11.272 20:28:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.272 20:28:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.272 20:28:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.272 20:28:04 version -- scripts/common.sh@368 -- # return 0 00:08:11.272 20:28:04 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.272 20:28:04 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.272 --rc genhtml_branch_coverage=1 00:08:11.272 --rc genhtml_function_coverage=1 00:08:11.272 --rc genhtml_legend=1 00:08:11.272 --rc geninfo_all_blocks=1 00:08:11.272 --rc geninfo_unexecuted_blocks=1 00:08:11.272 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.272 ' 00:08:11.272 20:28:04 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.272 --rc genhtml_branch_coverage=1 00:08:11.272 --rc genhtml_function_coverage=1 00:08:11.272 --rc genhtml_legend=1 00:08:11.272 --rc geninfo_all_blocks=1 00:08:11.272 --rc geninfo_unexecuted_blocks=1 00:08:11.272 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.272 ' 00:08:11.272 20:28:04 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.272 --rc genhtml_branch_coverage=1 00:08:11.272 --rc genhtml_function_coverage=1 00:08:11.272 --rc genhtml_legend=1 00:08:11.272 --rc geninfo_all_blocks=1 00:08:11.272 --rc geninfo_unexecuted_blocks=1 00:08:11.272 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.272 ' 00:08:11.272 20:28:04 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.273 --rc genhtml_branch_coverage=1 00:08:11.273 --rc genhtml_function_coverage=1 00:08:11.273 --rc genhtml_legend=1 00:08:11.273 --rc geninfo_all_blocks=1 00:08:11.273 --rc geninfo_unexecuted_blocks=1 00:08:11.273 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.273 ' 00:08:11.273 20:28:04 version -- app/version.sh@17 -- # get_header_version major 00:08:11.273 20:28:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:11.273 20:28:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:11.273 20:28:04 version -- app/version.sh@14 -- # cut -f2 00:08:11.273 20:28:04 version -- app/version.sh@17 -- # major=25 00:08:11.273 20:28:04 version -- app/version.sh@18 -- # get_header_version minor 00:08:11.273 20:28:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:11.273 20:28:04 version -- app/version.sh@14 -- # cut -f2 00:08:11.273 20:28:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:11.273 20:28:04 version -- app/version.sh@18 -- # minor=1 00:08:11.273 20:28:04 version -- app/version.sh@19 -- # get_header_version patch 00:08:11.273 20:28:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:11.273 20:28:04 version -- app/version.sh@14 -- # cut -f2 00:08:11.273 20:28:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:11.273 20:28:04 version -- app/version.sh@19 -- # patch=0 00:08:11.273 20:28:04 version -- app/version.sh@20 -- # get_header_version suffix 00:08:11.273 20:28:04 version -- app/version.sh@14 -- # cut -f2 00:08:11.273 20:28:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:11.273 20:28:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:11.273 20:28:04 version -- app/version.sh@20 -- # suffix=-pre 00:08:11.273 20:28:04 version -- app/version.sh@22 -- # version=25.1 00:08:11.273 20:28:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:11.273 20:28:04 version -- app/version.sh@28 -- # version=25.1rc0 00:08:11.273 20:28:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:11.273 20:28:04 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:08:11.273 20:28:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:11.273 20:28:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:11.273 00:08:11.273 real 0m0.282s 00:08:11.273 user 0m0.148s 00:08:11.273 sys 0m0.188s 00:08:11.273 20:28:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.273 20:28:04 version -- common/autotest_common.sh@10 -- # set +x 00:08:11.273 ************************************ 00:08:11.273 END TEST version 00:08:11.273 ************************************ 00:08:11.273 20:28:04 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:11.273 20:28:04 -- spdk/autotest.sh@194 -- # uname -s 00:08:11.273 20:28:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:11.273 20:28:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:11.273 20:28:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:11.273 20:28:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:11.273 20:28:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.273 20:28:04 -- common/autotest_common.sh@10 -- # set +x 00:08:11.273 20:28:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:08:11.273 20:28:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:08:11.273 20:28:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:08:11.273 20:28:04 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:08:11.273 20:28:04 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:11.273 20:28:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.273 20:28:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.273 20:28:04 -- common/autotest_common.sh@10 -- # set +x 00:08:11.273 ************************************ 00:08:11.273 START TEST llvm_fuzz 00:08:11.273 ************************************ 00:08:11.273 20:28:04 llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:11.532 * Looking for test storage... 
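version.sh assembles 25.1rc0 by pulling each SPDK_VERSION_* field out of include/spdk/version.h with the grep/cut/tr pipeline traced above, then cross-checks the result against the Python package. A condensed equivalent of that extraction, as a sketch of the helper, with the -pre to rc0 mapping inferred from what the trace shows:

  get_header_version() {  # field name -> value from version.h
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
      cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 25
  minor=$(get_header_version MINOR)    # 1
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre
  version="${major}.${minor}"
  [[ $patch != 0 ]] && version+=".${patch}"  # skipped here, patch is 0
  [[ $suffix == -pre ]] && version+=rc0      # 25.1 -> 25.1rc0
  echo "$version"                      # matches py_version above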
00:08:11.532 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.532 20:28:04 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.532 --rc genhtml_branch_coverage=1 00:08:11.532 --rc genhtml_function_coverage=1 00:08:11.532 --rc genhtml_legend=1 00:08:11.532 --rc geninfo_all_blocks=1 00:08:11.532 --rc geninfo_unexecuted_blocks=1 00:08:11.532 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.532 ' 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.532 --rc genhtml_branch_coverage=1 00:08:11.532 --rc genhtml_function_coverage=1 00:08:11.532 --rc genhtml_legend=1 00:08:11.532 --rc geninfo_all_blocks=1 00:08:11.532 --rc 
geninfo_unexecuted_blocks=1 00:08:11.532 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.532 ' 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.532 --rc genhtml_branch_coverage=1 00:08:11.532 --rc genhtml_function_coverage=1 00:08:11.532 --rc genhtml_legend=1 00:08:11.532 --rc geninfo_all_blocks=1 00:08:11.532 --rc geninfo_unexecuted_blocks=1 00:08:11.532 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.532 ' 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.532 --rc genhtml_branch_coverage=1 00:08:11.532 --rc genhtml_function_coverage=1 00:08:11.532 --rc genhtml_legend=1 00:08:11.532 --rc geninfo_all_blocks=1 00:08:11.532 --rc geninfo_unexecuted_blocks=1 00:08:11.532 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.532 ' 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@550 -- # fuzzers=() 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@550 -- # local fuzzers 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@552 -- # [[ -n '' ]] 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@555 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@556 -- # fuzzers=("${fuzzers[@]##*/}") 00:08:11.532 20:28:04 llvm_fuzz -- common/autotest_common.sh@559 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:11.532 20:28:04 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:11.533 20:28:04 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.533 20:28:04 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.533 20:28:04 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:11.533 ************************************ 00:08:11.533 START TEST nvmf_llvm_fuzz 00:08:11.533 ************************************ 00:08:11.533 20:28:04 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:11.793 * Looking for test storage... 
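llvm.sh discovers its fuzzer suites by globbing test/fuzz/llvm/ and stripping the results to basenames, which is why the helpers common.sh and llvm-gcov.sh appear alongside the real suites nvmf and vfio: the case dispatch simply skips them. A condensed sketch of that loop (the case arms are paraphrased from the trace, not copied from llvm.sh):

  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  fuzzers=("$rootdir/test/fuzz/llvm/"*)  # common.sh llvm-gcov.sh nvmf vfio
  fuzzers=("${fuzzers[@]##*/}")          # keep only the basenames
  for fuzzer in "${fuzzers[@]}"; do
    case "$fuzzer" in
      nvmf|vfio) run_test "${fuzzer}_llvm_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
      *) ;;  # support files such as common.sh and llvm-gcov.sh are skipped
    esac
  done

The nvmf suite is dispatched first, which is the run_test nvmf_llvm_fuzz invocation whose trace begins here.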
00:08:11.793 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:11.793 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.794 --rc genhtml_branch_coverage=1 00:08:11.794 --rc genhtml_function_coverage=1 00:08:11.794 --rc genhtml_legend=1 00:08:11.794 --rc geninfo_all_blocks=1 00:08:11.794 --rc geninfo_unexecuted_blocks=1 00:08:11.794 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.794 ' 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.794 --rc genhtml_branch_coverage=1 00:08:11.794 --rc genhtml_function_coverage=1 00:08:11.794 --rc genhtml_legend=1 00:08:11.794 --rc geninfo_all_blocks=1 00:08:11.794 --rc geninfo_unexecuted_blocks=1 00:08:11.794 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.794 ' 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.794 --rc genhtml_branch_coverage=1 00:08:11.794 --rc genhtml_function_coverage=1 00:08:11.794 --rc genhtml_legend=1 00:08:11.794 --rc geninfo_all_blocks=1 00:08:11.794 --rc geninfo_unexecuted_blocks=1 00:08:11.794 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.794 ' 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.794 --rc genhtml_branch_coverage=1 00:08:11.794 --rc genhtml_function_coverage=1 00:08:11.794 --rc genhtml_legend=1 00:08:11.794 --rc geninfo_all_blocks=1 00:08:11.794 --rc geninfo_unexecuted_blocks=1 00:08:11.794 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:11.794 ' 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:11.794 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:11.795 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:11.796 #define SPDK_CONFIG_H 00:08:11.796 #define SPDK_CONFIG_AIO_FSDEV 1 00:08:11.796 #define SPDK_CONFIG_APPS 1 00:08:11.796 #define SPDK_CONFIG_ARCH native 00:08:11.796 #undef SPDK_CONFIG_ASAN 00:08:11.796 #undef SPDK_CONFIG_AVAHI 00:08:11.796 #undef SPDK_CONFIG_CET 00:08:11.796 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:08:11.796 #define SPDK_CONFIG_COVERAGE 1 00:08:11.796 #define SPDK_CONFIG_CROSS_PREFIX 00:08:11.796 #undef SPDK_CONFIG_CRYPTO 00:08:11.796 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:11.796 #undef SPDK_CONFIG_CUSTOMOCF 00:08:11.796 #undef SPDK_CONFIG_DAOS 00:08:11.796 #define SPDK_CONFIG_DAOS_DIR 00:08:11.796 #define SPDK_CONFIG_DEBUG 1 00:08:11.796 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:11.796 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:11.796 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:11.796 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:11.796 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:11.796 #undef SPDK_CONFIG_DPDK_UADK 00:08:11.796 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:11.796 #define SPDK_CONFIG_EXAMPLES 1 00:08:11.796 #undef SPDK_CONFIG_FC 00:08:11.796 #define SPDK_CONFIG_FC_PATH 00:08:11.796 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:11.796 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:11.796 #define SPDK_CONFIG_FSDEV 1 00:08:11.796 #undef SPDK_CONFIG_FUSE 00:08:11.796 #define SPDK_CONFIG_FUZZER 1 00:08:11.796 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:11.796 #undef 
SPDK_CONFIG_GOLANG 00:08:11.796 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:11.796 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:11.796 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:11.796 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:11.796 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:11.796 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:11.796 #undef SPDK_CONFIG_HAVE_LZ4 00:08:11.796 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:08:11.796 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:08:11.796 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:11.796 #define SPDK_CONFIG_IDXD 1 00:08:11.796 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:11.796 #undef SPDK_CONFIG_IPSEC_MB 00:08:11.796 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:11.796 #define SPDK_CONFIG_ISAL 1 00:08:11.796 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:11.796 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:11.796 #define SPDK_CONFIG_LIBDIR 00:08:11.796 #undef SPDK_CONFIG_LTO 00:08:11.796 #define SPDK_CONFIG_MAX_LCORES 128 00:08:11.796 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:08:11.796 #define SPDK_CONFIG_NVME_CUSE 1 00:08:11.796 #undef SPDK_CONFIG_OCF 00:08:11.796 #define SPDK_CONFIG_OCF_PATH 00:08:11.796 #define SPDK_CONFIG_OPENSSL_PATH 00:08:11.796 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:11.796 #define SPDK_CONFIG_PGO_DIR 00:08:11.796 #undef SPDK_CONFIG_PGO_USE 00:08:11.796 #define SPDK_CONFIG_PREFIX /usr/local 00:08:11.796 #undef SPDK_CONFIG_RAID5F 00:08:11.796 #undef SPDK_CONFIG_RBD 00:08:11.796 #define SPDK_CONFIG_RDMA 1 00:08:11.796 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:11.796 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:11.796 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:11.796 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:11.796 #undef SPDK_CONFIG_SHARED 00:08:11.796 #undef SPDK_CONFIG_SMA 00:08:11.796 #define SPDK_CONFIG_TESTS 1 00:08:11.796 #undef SPDK_CONFIG_TSAN 00:08:11.796 #define SPDK_CONFIG_UBLK 1 00:08:11.796 #define SPDK_CONFIG_UBSAN 1 00:08:11.796 #undef SPDK_CONFIG_UNIT_TESTS 00:08:11.796 #undef SPDK_CONFIG_URING 00:08:11.796 #define SPDK_CONFIG_URING_PATH 00:08:11.796 #undef SPDK_CONFIG_URING_ZNS 00:08:11.796 #undef SPDK_CONFIG_USDT 00:08:11.796 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:11.796 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:11.796 #define SPDK_CONFIG_VFIO_USER 1 00:08:11.796 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:11.796 #define SPDK_CONFIG_VHOST 1 00:08:11.796 #define SPDK_CONFIG_VIRTIO 1 00:08:11.796 #undef SPDK_CONFIG_VTUNE 00:08:11.796 #define SPDK_CONFIG_VTUNE_DIR 00:08:11.796 #define SPDK_CONFIG_WERROR 1 00:08:11.796 #define SPDK_CONFIG_WPDK_DIR 00:08:11.796 #undef SPDK_CONFIG_XNVME 00:08:11.796 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:11.796 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:08:11.797 20:28:05 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:08:11.797 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
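The long run of `: 0` / `export SPDK_TEST_*` entries in this stretch of the trace is autotest_common.sh seeding every test flag with a default before exporting it, so the flags set earlier in autorun-spdk.conf (SPDK_TEST_FUZZER=1, SPDK_RUN_UBSAN=1, and so on) keep their values while everything else falls back to 0. A minimal sketch of that idiom, using a few illustrative flags rather than the full SPDK list:

    #!/usr/bin/env bash
    # `: "${VAR:=0}"` is a no-op command whose parameter expansion assigns 0
    # only when VAR is unset or empty; under `set -x` it traces as `: 0`,
    # which is exactly the pattern visible in the log above.
    : "${RUN_NIGHTLY:=0}"
    export RUN_NIGHTLY
    : "${SPDK_TEST_FUZZER:=0}"
    export SPDK_TEST_FUZZER
    : "${SPDK_TEST_FUZZER_SHORT:=0}"
    export SPDK_TEST_FUZZER_SHORT
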
00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:11.798 20:28:05 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.798 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 1839668 ]] 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 1839668 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.td3TZA 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:08:11.799 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.td3TZA/tests/nvmf /tmp/spdk.td3TZA 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:08:12.066 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=49490542592 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61734400000 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=12243857408 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30862434304 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30867197952 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340969472 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346880000 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5910528 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30866624512 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30867202048 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=577536 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173425664 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173437952 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:08:12.067 * Looking for test storage... 
00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=49490542592 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=14458449920 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:12.067 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:08:12.067 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:12.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.068 --rc genhtml_branch_coverage=1 00:08:12.068 --rc genhtml_function_coverage=1 00:08:12.068 --rc genhtml_legend=1 00:08:12.068 --rc geninfo_all_blocks=1 00:08:12.068 --rc geninfo_unexecuted_blocks=1 00:08:12.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:12.068 ' 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:12.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.068 --rc genhtml_branch_coverage=1 00:08:12.068 --rc genhtml_function_coverage=1 00:08:12.068 --rc genhtml_legend=1 00:08:12.068 --rc geninfo_all_blocks=1 00:08:12.068 --rc geninfo_unexecuted_blocks=1 00:08:12.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:12.068 ' 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:12.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.068 --rc genhtml_branch_coverage=1 00:08:12.068 --rc genhtml_function_coverage=1 00:08:12.068 --rc genhtml_legend=1 00:08:12.068 --rc geninfo_all_blocks=1 00:08:12.068 --rc geninfo_unexecuted_blocks=1 00:08:12.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:12.068 ' 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:12.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.068 --rc genhtml_branch_coverage=1 00:08:12.068 --rc genhtml_function_coverage=1 00:08:12.068 --rc genhtml_legend=1 00:08:12.068 --rc geninfo_all_blocks=1 00:08:12.068 --rc geninfo_unexecuted_blocks=1 00:08:12.068 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:12.068 ' 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:08:12.068 20:28:05 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:12.068 20:28:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:08:12.068 [2024-12-05 20:28:05.415979] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:12.068 [2024-12-05 20:28:05.416066] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839732 ] 00:08:12.326 [2024-12-05 20:28:05.628213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.326 [2024-12-05 20:28:05.667061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.326 [2024-12-05 20:28:05.726704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.326 [2024-12-05 20:28:05.742943] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:08:12.326 INFO: Running with entropic power schedule (0xFF, 100). 00:08:12.326 INFO: Seed: 2312302662 00:08:12.583 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:12.583 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:12.583 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:12.583 INFO: A corpus is not provided, starting from an empty corpus 00:08:12.583 #2 INITED exec/s: 0 rss: 67Mb 00:08:12.583 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:12.583 This may also happen if the target rejected all inputs we tried so far 00:08:12.583 [2024-12-05 20:28:05.820413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.583 [2024-12-05 20:28:05.820459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.841 NEW_FUNC[1/714]: 0x43bbe8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:08:12.841 NEW_FUNC[2/714]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:12.841 #18 NEW cov: 12069 ft: 12070 corp: 2/92b lim: 320 exec/s: 0 rss: 74Mb L: 91/91 MS: 1 InsertRepeatedBytes- 00:08:12.841 [2024-12-05 20:28:06.160769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.841 [2024-12-05 20:28:06.160811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.841 NEW_FUNC[1/1]: 0x19b6808 in nvme_get_transport /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:56 00:08:12.841 #24 NEW cov: 12204 ft: 12628 corp: 3/184b lim: 320 exec/s: 0 rss: 74Mb L: 92/92 MS: 1 InsertByte- 00:08:12.841 [2024-12-05 20:28:06.231351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.841 [2024-12-05 20:28:06.231382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.841 [2024-12-05 20:28:06.231476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.841 [2024-12-05 
20:28:06.231493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:12.841 #25 NEW cov: 12214 ft: 13173 corp: 4/358b lim: 320 exec/s: 0 rss: 74Mb L: 174/174 MS: 1 CrossOver- 00:08:13.100 [2024-12-05 20:28:06.301530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.100 [2024-12-05 20:28:06.301558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.100 #27 NEW cov: 12299 ft: 13410 corp: 5/462b lim: 320 exec/s: 0 rss: 74Mb L: 104/174 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:13.100 [2024-12-05 20:28:06.352102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:08:13.100 [2024-12-05 20:28:06.352129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.100 [2024-12-05 20:28:06.352224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:13.100 [2024-12-05 20:28:06.352241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.100 NEW_FUNC[1/2]: 0x1542098 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2221 00:08:13.100 NEW_FUNC[2/2]: 0x1975bf8 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:08:13.100 #28 NEW cov: 12353 ft: 13529 corp: 6/646b lim: 320 exec/s: 0 rss: 75Mb L: 184/184 MS: 1 InsertRepeatedBytes- 00:08:13.100 [2024-12-05 20:28:06.422395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.100 [2024-12-05 20:28:06.422422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.100 [2024-12-05 20:28:06.422515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.100 [2024-12-05 20:28:06.422531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.100 #29 NEW cov: 12353 ft: 13645 corp: 7/820b lim: 320 exec/s: 0 rss: 75Mb L: 174/184 MS: 1 ShuffleBytes- 00:08:13.100 [2024-12-05 20:28:06.492658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.100 [2024-12-05 20:28:06.492686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.100 #30 NEW cov: 12353 ft: 13710 corp: 8/912b lim: 320 exec/s: 0 rss: 75Mb L: 92/184 MS: 1 InsertByte- 00:08:13.358 [2024-12-05 20:28:06.543228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.358 [2024-12-05 20:28:06.543256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.358 [2024-12-05 20:28:06.543343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:c7c7c7c7 
cdw11:c7c7c7c7 00:08:13.358 [2024-12-05 20:28:06.543359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.358 #31 NEW cov: 12353 ft: 13746 corp: 9/1067b lim: 320 exec/s: 0 rss: 75Mb L: 155/184 MS: 1 InsertRepeatedBytes- 00:08:13.358 [2024-12-05 20:28:06.613927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.358 [2024-12-05 20:28:06.613953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.358 [2024-12-05 20:28:06.614050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:c7c7c7c7 cdw11:c7c7c7c7 00:08:13.358 [2024-12-05 20:28:06.614068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.358 [2024-12-05 20:28:06.614170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c7) qid:0 cid:6 nsid:c7c7c7c7 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x99 00:08:13.358 [2024-12-05 20:28:06.614186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.358 #32 NEW cov: 12353 ft: 13911 corp: 10/1318b lim: 320 exec/s: 0 rss: 75Mb L: 251/251 MS: 1 CopyPart- 00:08:13.358 [2024-12-05 20:28:06.684351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.358 [2024-12-05 20:28:06.684378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.358 [2024-12-05 20:28:06.684457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000006 00:08:13.358 [2024-12-05 20:28:06.684472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.358 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:13.358 #33 NEW cov: 12376 ft: 13971 corp: 11/1492b lim: 320 exec/s: 0 rss: 75Mb L: 174/251 MS: 1 ChangeBinInt- 00:08:13.358 [2024-12-05 20:28:06.754846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.358 [2024-12-05 20:28:06.754874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.358 [2024-12-05 20:28:06.754966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.358 [2024-12-05 20:28:06.754983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.358 #34 NEW cov: 12376 ft: 13986 corp: 12/1666b lim: 320 exec/s: 0 rss: 75Mb L: 174/251 MS: 1 ChangeBinInt- 00:08:13.616 [2024-12-05 20:28:06.805318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.616 [2024-12-05 20:28:06.805346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.616 [2024-12-05 20:28:06.805445] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:c7c7c7c7 cdw11:c7c7c7c7 00:08:13.616 [2024-12-05 20:28:06.805462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.616 [2024-12-05 20:28:06.805568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c7) qid:0 cid:6 nsid:c7c7c7c7 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x99 00:08:13.616 [2024-12-05 20:28:06.805589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.616 #35 NEW cov: 12376 ft: 14099 corp: 13/1917b lim: 320 exec/s: 35 rss: 75Mb L: 251/251 MS: 1 ChangeBit- 00:08:13.616 [2024-12-05 20:28:06.875750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.616 [2024-12-05 20:28:06.875777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.616 [2024-12-05 20:28:06.875868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:c7c7c7c7 cdw11:c7c7c7c7 00:08:13.616 [2024-12-05 20:28:06.875884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.616 [2024-12-05 20:28:06.875987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c7) qid:0 cid:6 nsid:c7c7c7c7 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x99 00:08:13.616 [2024-12-05 20:28:06.876004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.616 #36 NEW cov: 12376 ft: 14141 corp: 14/2169b lim: 320 exec/s: 36 rss: 75Mb L: 252/252 MS: 1 InsertByte- 00:08:13.616 [2024-12-05 20:28:06.925885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.616 [2024-12-05 20:28:06.925912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.616 [2024-12-05 20:28:06.926009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.616 [2024-12-05 20:28:06.926026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.616 #37 NEW cov: 12376 ft: 14170 corp: 15/2331b lim: 320 exec/s: 37 rss: 75Mb L: 162/252 MS: 1 CrossOver- 00:08:13.616 [2024-12-05 20:28:06.975748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.617 [2024-12-05 20:28:06.975777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.617 #38 NEW cov: 12376 ft: 14204 corp: 16/2422b lim: 320 exec/s: 38 rss: 75Mb L: 91/252 MS: 1 ChangeBit- 00:08:13.617 [2024-12-05 20:28:07.026588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.617 [2024-12-05 20:28:07.026617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.617 [2024-12-05 20:28:07.026699] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:99 cdw10:00000000 cdw11:00000000 00:08:13.617 [2024-12-05 20:28:07.026717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.617 [2024-12-05 20:28:07.026826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c7) qid:0 cid:6 nsid:c7c7c7c7 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.617 [2024-12-05 20:28:07.026842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.617 [2024-12-05 20:28:07.026938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c7) qid:0 cid:7 nsid:c7c7c7c7 cdw10:c7c7c7c7 cdw11:c7c7c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0xc7c7c7c7c7c7c7c7 00:08:13.617 [2024-12-05 20:28:07.026953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.617 #39 NEW cov: 12383 ft: 14430 corp: 17/2681b lim: 320 exec/s: 39 rss: 75Mb L: 259/259 MS: 1 CopyPart- 00:08:13.874 [2024-12-05 20:28:07.076100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.874 [2024-12-05 20:28:07.076132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.874 #40 NEW cov: 12383 ft: 14490 corp: 18/2775b lim: 320 exec/s: 40 rss: 75Mb L: 94/259 MS: 1 CrossOver- 00:08:13.874 [2024-12-05 20:28:07.146408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.874 [2024-12-05 20:28:07.146437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.874 #41 NEW cov: 12383 ft: 14503 corp: 19/2880b lim: 320 exec/s: 41 rss: 75Mb L: 105/259 MS: 1 InsertByte- 00:08:13.874 [2024-12-05 20:28:07.196877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.874 [2024-12-05 20:28:07.196905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.874 [2024-12-05 20:28:07.196989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.874 [2024-12-05 20:28:07.197007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.874 #42 NEW cov: 12383 ft: 14511 corp: 20/3054b lim: 320 exec/s: 42 rss: 75Mb L: 174/259 MS: 1 ChangeByte- 00:08:13.874 [2024-12-05 20:28:07.267339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.874 [2024-12-05 20:28:07.267368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.874 [2024-12-05 20:28:07.267456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000006 00:08:13.874 [2024-12-05 20:28:07.267474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.874 #43 NEW cov: 12383 ft: 14546 
corp: 21/3228b lim: 320 exec/s: 43 rss: 75Mb L: 174/259 MS: 1 ChangeBinInt- 00:08:14.131 [2024-12-05 20:28:07.337404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:08:14.131 [2024-12-05 20:28:07.337432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.132 [2024-12-05 20:28:07.337532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:3fffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:08:14.132 [2024-12-05 20:28:07.337550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.132 #44 NEW cov: 12383 ft: 14558 corp: 22/3413b lim: 320 exec/s: 44 rss: 75Mb L: 185/259 MS: 1 InsertByte- 00:08:14.132 [2024-12-05 20:28:07.407381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (f2) qid:0 cid:4 nsid:0 cdw10:000000f9 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.132 [2024-12-05 20:28:07.407407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.132 #48 NEW cov: 12384 ft: 14583 corp: 23/3519b lim: 320 exec/s: 48 rss: 75Mb L: 106/259 MS: 4 InsertByte-InsertByte-CrossOver-InsertRepeatedBytes- 00:08:14.132 [2024-12-05 20:28:07.458228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:002d0000 00:08:14.132 [2024-12-05 20:28:07.458255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.132 [2024-12-05 20:28:07.458341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:c7c7c700 cdw11:c7c7c7c7 00:08:14.132 [2024-12-05 20:28:07.458358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.132 [2024-12-05 20:28:07.458466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (c7) qid:0 cid:6 nsid:c7c7c7c7 cdw10:00009900 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xc7c7c7c7c7c7c7 00:08:14.132 [2024-12-05 20:28:07.458481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.132 [2024-12-05 20:28:07.458572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:c7c7c7c7 cdw11:c7c7c7c7 00:08:14.132 [2024-12-05 20:28:07.458587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.132 #49 NEW cov: 12384 ft: 14631 corp: 24/3787b lim: 320 exec/s: 49 rss: 75Mb L: 268/268 MS: 1 CrossOver- 00:08:14.132 [2024-12-05 20:28:07.507609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.132 [2024-12-05 20:28:07.507635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.132 #50 NEW cov: 12384 ft: 14687 corp: 25/3879b lim: 320 exec/s: 50 rss: 75Mb L: 92/268 MS: 1 ChangeBinInt- 00:08:14.132 [2024-12-05 20:28:07.558439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 00:08:14.132 [2024-12-05 20:28:07.558464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.132 [2024-12-05 20:28:07.558551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00002d00 00:08:14.132 [2024-12-05 20:28:07.558566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.132 [2024-12-05 20:28:07.558646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:c7c70000 cdw10:00000000 cdw11:00000000 00:08:14.132 [2024-12-05 20:28:07.558662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.389 #51 NEW cov: 12384 ft: 14710 corp: 26/4130b lim: 320 exec/s: 51 rss: 75Mb L: 251/268 MS: 1 CrossOver- 00:08:14.389 [2024-12-05 20:28:07.608381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.389 [2024-12-05 20:28:07.608406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.389 #52 NEW cov: 12384 ft: 14723 corp: 27/4234b lim: 320 exec/s: 52 rss: 75Mb L: 104/268 MS: 1 ShuffleBytes- 00:08:14.389 [2024-12-05 20:28:07.658822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.389 [2024-12-05 20:28:07.658850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.389 [2024-12-05 20:28:07.658940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.389 [2024-12-05 20:28:07.658958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.389 #53 NEW cov: 12384 ft: 14735 corp: 28/4408b lim: 320 exec/s: 53 rss: 75Mb L: 174/268 MS: 1 CopyPart- 00:08:14.389 [2024-12-05 20:28:07.729411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.389 [2024-12-05 20:28:07.729436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.389 [2024-12-05 20:28:07.729527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.389 [2024-12-05 20:28:07.729542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.389 [2024-12-05 20:28:07.729632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:30000000 cdw11:00000000 00:08:14.389 [2024-12-05 20:28:07.729651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.389 #54 NEW cov: 12384 ft: 14744 corp: 29/4602b lim: 320 exec/s: 54 rss: 76Mb L: 194/268 MS: 1 CrossOver- 00:08:14.389 [2024-12-05 20:28:07.799176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.389 [2024-12-05 20:28:07.799202] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.647 #55 NEW cov: 12384 ft: 14757 corp: 30/4708b lim: 320 exec/s: 27 rss: 76Mb L: 106/268 MS: 1 EraseBytes- 00:08:14.647 #55 DONE cov: 12384 ft: 14757 corp: 30/4708b lim: 320 exec/s: 27 rss: 76Mb 00:08:14.647 Done 55 runs in 2 second(s) 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:14.647 20:28:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:08:14.647 [2024-12-05 20:28:07.994421] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:14.647 [2024-12-05 20:28:07.994487] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1840098 ] 00:08:14.905 [2024-12-05 20:28:08.200626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.905 [2024-12-05 20:28:08.240782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.905 [2024-12-05 20:28:08.300670] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.905 [2024-12-05 20:28:08.316925] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:08:14.905 INFO: Running with entropic power schedule (0xFF, 100). 00:08:14.905 INFO: Seed: 592350778 00:08:15.162 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:15.162 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:15.162 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:15.162 INFO: A corpus is not provided, starting from an empty corpus 00:08:15.162 #2 INITED exec/s: 0 rss: 67Mb 00:08:15.162 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:15.162 This may also happen if the target rejected all inputs we tried so far 00:08:15.162 [2024-12-05 20:28:08.394174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.162 [2024-12-05 20:28:08.394216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.420 NEW_FUNC[1/717]: 0x43c4e8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:08:15.420 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:15.420 #11 NEW cov: 12202 ft: 12189 corp: 2/10b lim: 30 exec/s: 0 rss: 74Mb L: 9/9 MS: 4 ChangeByte-CrossOver-ChangeByte-InsertRepeatedBytes- 00:08:15.420 [2024-12-05 20:28:08.744353] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:15.420 [2024-12-05 20:28:08.744821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.420 [2024-12-05 20:28:08.744872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.420 #12 NEW cov: 12339 ft: 12938 corp: 3/18b lim: 30 exec/s: 0 rss: 74Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:08:15.420 [2024-12-05 20:28:08.804614] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006c32 00:08:15.420 [2024-12-05 20:28:08.805113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e6d2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.420 [2024-12-05 20:28:08.805143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.420 #24 NEW cov: 12345 ft: 13109 corp: 4/27b lim: 30 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 ChangeByte-CMP- DE: 
"\322\017\007l2\002x\000"- 00:08:15.420 [2024-12-05 20:28:08.855102] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100009191 00:08:15.420 [2024-12-05 20:28:08.855534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a918191 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.420 [2024-12-05 20:28:08.855564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.678 #25 NEW cov: 12430 ft: 13444 corp: 5/35b lim: 30 exec/s: 0 rss: 74Mb L: 8/9 MS: 1 CopyPart- 00:08:15.678 [2024-12-05 20:28:08.925622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.678 [2024-12-05 20:28:08.925650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.678 #31 NEW cov: 12430 ft: 13533 corp: 6/44b lim: 30 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:15.678 [2024-12-05 20:28:08.995399] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006c32 00:08:15.678 [2024-12-05 20:28:08.995922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e6d2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.678 [2024-12-05 20:28:08.995951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.678 #32 NEW cov: 12430 ft: 13607 corp: 7/53b lim: 30 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CrossOver- 00:08:15.678 [2024-12-05 20:28:09.065842] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006c32 00:08:15.678 [2024-12-05 20:28:09.066320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1ed2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.678 [2024-12-05 20:28:09.066350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.678 #36 NEW cov: 12430 ft: 13685 corp: 8/62b lim: 30 exec/s: 0 rss: 74Mb L: 9/9 MS: 4 ShuffleBytes-ChangeBit-ChangeBit-PersAutoDict- DE: "\322\017\007l2\002x\000"- 00:08:15.936 [2024-12-05 20:28:09.116116] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200003256 00:08:15.936 [2024-12-05 20:28:09.116598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:c9000278 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.936 [2024-12-05 20:28:09.116628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.936 #39 NEW cov: 12430 ft: 13724 corp: 9/72b lim: 30 exec/s: 0 rss: 74Mb L: 10/10 MS: 3 ChangeBit-InsertByte-CMP- DE: "\000x\0022V\244\360R"- 00:08:15.936 [2024-12-05 20:28:09.166563] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (236364) > buf size (4096) 00:08:15.936 [2024-12-05 20:28:09.167063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e6d2000f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.936 [2024-12-05 20:28:09.167093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.936 #40 NEW 
cov: 12438 ft: 13792 corp: 10/82b lim: 30 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:08:15.936 [2024-12-05 20:28:09.237188] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3202 00:08:15.936 [2024-12-05 20:28:09.237688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d20f0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.936 [2024-12-05 20:28:09.237717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.936 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:15.936 #41 NEW cov: 12461 ft: 13878 corp: 11/91b lim: 30 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 PersAutoDict- DE: "\322\017\007l2\002x\000"- 00:08:15.936 [2024-12-05 20:28:09.307433] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (965756) > buf size (4096) 00:08:15.936 [2024-12-05 20:28:09.307955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:af1e83d2 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.936 [2024-12-05 20:28:09.307988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.936 #42 NEW cov: 12461 ft: 13947 corp: 12/101b lim: 30 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:08:16.193 [2024-12-05 20:28:09.377737] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006c32 00:08:16.193 [2024-12-05 20:28:09.378015] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b6b6 00:08:16.193 [2024-12-05 20:28:09.378450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e6d2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.193 [2024-12-05 20:28:09.378480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.193 [2024-12-05 20:28:09.378570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:027802b6 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.193 [2024-12-05 20:28:09.378587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.193 #43 NEW cov: 12461 ft: 14287 corp: 13/117b lim: 30 exec/s: 43 rss: 74Mb L: 16/16 MS: 1 InsertRepeatedBytes- 00:08:16.193 [2024-12-05 20:28:09.427994] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300005f5f 00:08:16.193 [2024-12-05 20:28:09.428270] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x232 00:08:16.193 [2024-12-05 20:28:09.428710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5f5f835f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.193 [2024-12-05 20:28:09.428741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.193 [2024-12-05 20:28:09.428838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:5fc90000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.193 [2024-12-05 20:28:09.428855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:08:16.193 #44 NEW cov: 12461 ft: 14295 corp: 14/134b lim: 30 exec/s: 44 rss: 74Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:08:16.193 [2024-12-05 20:28:09.498236] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006c32 00:08:16.193 [2024-12-05 20:28:09.498726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1ed2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.193 [2024-12-05 20:28:09.498760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.193 #45 NEW cov: 12461 ft: 14314 corp: 15/144b lim: 30 exec/s: 45 rss: 74Mb L: 10/17 MS: 1 InsertByte- 00:08:16.193 [2024-12-05 20:28:09.548325] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3202 00:08:16.193 [2024-12-05 20:28:09.548766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d20f0007 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.193 [2024-12-05 20:28:09.548794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.193 #46 NEW cov: 12461 ft: 14340 corp: 16/153b lim: 30 exec/s: 46 rss: 74Mb L: 9/17 MS: 1 ChangeByte- 00:08:16.194 [2024-12-05 20:28:09.618623] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006c32 00:08:16.194 [2024-12-05 20:28:09.619086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e6d2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.194 [2024-12-05 20:28:09.619115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.451 #52 NEW cov: 12461 ft: 14361 corp: 17/163b lim: 30 exec/s: 52 rss: 75Mb L: 10/17 MS: 1 PersAutoDict- DE: "\322\017\007l2\002x\000"- 00:08:16.451 [2024-12-05 20:28:09.689342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.451 [2024-12-05 20:28:09.689382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.451 #53 NEW cov: 12461 ft: 14386 corp: 18/172b lim: 30 exec/s: 53 rss: 75Mb L: 9/17 MS: 1 ChangeBinInt- 00:08:16.451 [2024-12-05 20:28:09.739056] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (817996) > buf size (4096) 00:08:16.451 [2024-12-05 20:28:09.739545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1ed2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.451 [2024-12-05 20:28:09.739574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.451 #54 NEW cov: 12461 ft: 14407 corp: 19/181b lim: 30 exec/s: 54 rss: 75Mb L: 9/17 MS: 1 CopyPart- 00:08:16.451 [2024-12-05 20:28:09.789529] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (965756) > buf size (4096) 00:08:16.451 [2024-12-05 20:28:09.789814] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d9d9 00:08:16.451 [2024-12-05 20:28:09.790084] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000d9d9 00:08:16.452 [2024-12-05 20:28:09.790551] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:af1e83d2 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.452 [2024-12-05 20:28:09.790583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.452 [2024-12-05 20:28:09.790676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:32d981d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.452 [2024-12-05 20:28:09.790693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.452 [2024-12-05 20:28:09.790784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d9d981d9 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.452 [2024-12-05 20:28:09.790801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.452 #55 NEW cov: 12461 ft: 14717 corp: 20/202b lim: 30 exec/s: 55 rss: 75Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:08:16.452 [2024-12-05 20:28:09.859789] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006c9d 00:08:16.452 [2024-12-05 20:28:09.860282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1ed2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.452 [2024-12-05 20:28:09.860313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.709 #56 NEW cov: 12461 ft: 14727 corp: 21/211b lim: 30 exec/s: 56 rss: 75Mb L: 9/21 MS: 1 ChangeBinInt- 00:08:16.709 [2024-12-05 20:28:09.930033] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006c31 00:08:16.709 [2024-12-05 20:28:09.930576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1ed2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.709 [2024-12-05 20:28:09.930606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.709 #57 NEW cov: 12461 ft: 14734 corp: 22/220b lim: 30 exec/s: 57 rss: 75Mb L: 9/21 MS: 1 ChangeASCIIInt- 00:08:16.709 [2024-12-05 20:28:09.980040] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006c07 00:08:16.709 [2024-12-05 20:28:09.980513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:64e683d2 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.709 [2024-12-05 20:28:09.980542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.709 #58 NEW cov: 12461 ft: 14766 corp: 23/231b lim: 30 exec/s: 58 rss: 75Mb L: 11/21 MS: 1 InsertByte- 00:08:16.709 [2024-12-05 20:28:10.030655] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.709 [2024-12-05 20:28:10.030929] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.709 [2024-12-05 20:28:10.031216] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.709 [2024-12-05 20:28:10.031493] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.709 [2024-12-05 20:28:10.031956] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.709 [2024-12-05 20:28:10.031991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.709 [2024-12-05 20:28:10.032090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.709 [2024-12-05 20:28:10.032108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.709 [2024-12-05 20:28:10.032203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.709 [2024-12-05 20:28:10.032221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.709 [2024-12-05 20:28:10.032322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.709 [2024-12-05 20:28:10.032341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.709 #60 NEW cov: 12461 ft: 15306 corp: 24/258b lim: 30 exec/s: 60 rss: 75Mb L: 27/27 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:16.709 [2024-12-05 20:28:10.090645] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (236364) > buf size (4096) 00:08:16.710 [2024-12-05 20:28:10.090948] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:08:16.710 [2024-12-05 20:28:10.091218] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (531892) > buf size (4096) 00:08:16.710 [2024-12-05 20:28:10.091700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e6d2000f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.710 [2024-12-05 20:28:10.091732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.710 [2024-12-05 20:28:10.091824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.710 [2024-12-05 20:28:10.091841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.710 [2024-12-05 20:28:10.091935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:076c0232 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.710 [2024-12-05 20:28:10.091952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.710 #61 NEW cov: 12461 ft: 15343 corp: 25/276b lim: 30 exec/s: 61 rss: 75Mb L: 18/27 MS: 1 InsertRepeatedBytes- 00:08:16.710 [2024-12-05 20:28:10.140782] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (236364) > buf size (4096) 00:08:16.710 [2024-12-05 20:28:10.141073] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (127476) > buf size (4096) 00:08:16.710 [2024-12-05 20:28:10.141339] 
ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (530868) > buf size (4096) 00:08:16.710 [2024-12-05 20:28:10.141808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e6d2000f cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.710 [2024-12-05 20:28:10.141838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.710 [2024-12-05 20:28:10.141939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:7c7c007c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.710 [2024-12-05 20:28:10.141957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.710 [2024-12-05 20:28:10.142053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:066c0232 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.710 [2024-12-05 20:28:10.142071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.967 #62 NEW cov: 12461 ft: 15379 corp: 26/294b lim: 30 exec/s: 62 rss: 75Mb L: 18/27 MS: 1 ChangeBit- 00:08:16.967 [2024-12-05 20:28:10.210942] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1022796) > buf size (4096) 00:08:16.967 [2024-12-05 20:28:10.211461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:e6d2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.967 [2024-12-05 20:28:10.211491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.967 #63 NEW cov: 12461 ft: 15388 corp: 27/304b lim: 30 exec/s: 63 rss: 75Mb L: 10/27 MS: 1 InsertByte- 00:08:16.967 [2024-12-05 20:28:10.261532] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.967 [2024-12-05 20:28:10.261812] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.967 [2024-12-05 20:28:10.262074] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.967 [2024-12-05 20:28:10.262349] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.967 [2024-12-05 20:28:10.262819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.967 [2024-12-05 20:28:10.262847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.968 [2024-12-05 20:28:10.262937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.968 [2024-12-05 20:28:10.262954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.968 [2024-12-05 20:28:10.263045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.968 [2024-12-05 20:28:10.263063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:08:16.968 [2024-12-05 20:28:10.263154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.968 [2024-12-05 20:28:10.263172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.968 #64 NEW cov: 12461 ft: 15400 corp: 28/331b lim: 30 exec/s: 64 rss: 75Mb L: 27/27 MS: 1 ShuffleBytes- 00:08:16.968 [2024-12-05 20:28:10.331426] ctrlr.c:2669:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (817996) > buf size (4096) 00:08:16.968 [2024-12-05 20:28:10.331913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1ed2830f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.968 [2024-12-05 20:28:10.331944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.968 #65 NEW cov: 12461 ft: 15412 corp: 29/340b lim: 30 exec/s: 65 rss: 75Mb L: 9/27 MS: 1 ChangeASCIIInt- 00:08:16.968 [2024-12-05 20:28:10.381717] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.968 [2024-12-05 20:28:10.381990] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.968 [2024-12-05 20:28:10.382259] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009b9b 00:08:16.968 [2024-12-05 20:28:10.382514] ctrlr.c:2657:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300009a9b 00:08:16.968 [2024-12-05 20:28:10.382962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.968 [2024-12-05 20:28:10.382991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.968 [2024-12-05 20:28:10.383099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.968 [2024-12-05 20:28:10.383116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.968 [2024-12-05 20:28:10.383215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.968 [2024-12-05 20:28:10.383232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.968 [2024-12-05 20:28:10.383328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:9b9b839b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.968 [2024-12-05 20:28:10.383348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.225 #66 NEW cov: 12461 ft: 15464 corp: 30/367b lim: 30 exec/s: 33 rss: 75Mb L: 27/27 MS: 1 ChangeBit- 00:08:17.225 #66 DONE cov: 12461 ft: 15464 corp: 30/367b lim: 30 exec/s: 33 rss: 75Mb 00:08:17.225 ###### Recommended dictionary. ###### 00:08:17.225 "\322\017\007l2\002x\000" # Uses: 3 00:08:17.225 "\000x\0022V\244\360R" # Uses: 0 00:08:17.225 ###### End of recommended dictionary. 
###### 00:08:17.225 Done 66 runs in 2 second(s) 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:17.226 20:28:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:08:17.226 [2024-12-05 20:28:10.583047] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:17.226 [2024-12-05 20:28:10.583120] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1840473 ] 00:08:17.483 [2024-12-05 20:28:10.855186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.483 [2024-12-05 20:28:10.912004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.741 [2024-12-05 20:28:10.971299] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.741 [2024-12-05 20:28:10.987566] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:08:17.741 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:17.741 INFO: Seed: 3262361779 00:08:17.741 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:17.741 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:17.741 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:17.741 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.741 #2 INITED exec/s: 0 rss: 67Mb 00:08:17.741 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:17.741 This may also happen if the target rejected all inputs we tried so far 00:08:17.741 [2024-12-05 20:28:11.065178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.741 [2024-12-05 20:28:11.065232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.999 NEW_FUNC[1/716]: 0x43ef98 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:08:17.999 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:17.999 #12 NEW cov: 12133 ft: 12115 corp: 2/10b lim: 35 exec/s: 0 rss: 74Mb L: 9/9 MS: 5 CMP-ChangeByte-InsertByte-InsertByte-CMP- DE: "\377D"-"\377\377\377\377"- 00:08:17.999 [2024-12-05 20:28:11.415409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.999 [2024-12-05 20:28:11.415453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.256 #13 NEW cov: 12263 ft: 12653 corp: 3/19b lim: 35 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ChangeBinInt- 00:08:18.256 [2024-12-05 20:28:11.485779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.256 [2024-12-05 20:28:11.485807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.256 [2024-12-05 20:28:11.485892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff005b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.256 [2024-12-05 20:28:11.485909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.256 #14 NEW cov: 12269 ft: 13225 corp: 4/34b lim: 35 exec/s: 0 rss: 74Mb L: 15/15 MS: 1 CrossOver- 00:08:18.256 [2024-12-05 20:28:11.535711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.256 [2024-12-05 20:28:11.535738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.256 #15 NEW cov: 12354 ft: 13562 corp: 5/46b lim: 35 exec/s: 0 rss: 74Mb L: 12/15 MS: 1 CopyPart- 00:08:18.256 [2024-12-05 20:28:11.605962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00fff7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.256 [2024-12-05 20:28:11.605990] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.257 #16 NEW cov: 12354 ft: 13623 corp: 6/55b lim: 35 exec/s: 0 rss: 74Mb L: 9/15 MS: 1 ChangeBit- 00:08:18.257 [2024-12-05 20:28:11.656083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00f7c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.257 [2024-12-05 20:28:11.656113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.513 #17 NEW cov: 12354 ft: 13715 corp: 7/64b lim: 35 exec/s: 0 rss: 74Mb L: 9/15 MS: 1 ShuffleBytes- 00:08:18.513 [2024-12-05 20:28:11.726334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:4400ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.513 [2024-12-05 20:28:11.726360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.513 #18 NEW cov: 12354 ft: 13817 corp: 8/73b lim: 35 exec/s: 0 rss: 74Mb L: 9/15 MS: 1 PersAutoDict- DE: "\377D"- 00:08:18.513 [2024-12-05 20:28:11.776181] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:18.513 [2024-12-05 20:28:11.776874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:ff00c6ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.513 [2024-12-05 20:28:11.776907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.513 [2024-12-05 20:28:11.777003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fff600ff cdw11:ff005bff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.513 [2024-12-05 20:28:11.777020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.513 #19 NEW cov: 12365 ft: 13909 corp: 9/88b lim: 35 exec/s: 0 rss: 74Mb L: 15/15 MS: 1 InsertRepeatedBytes- 00:08:18.513 [2024-12-05 20:28:11.846835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.513 [2024-12-05 20:28:11.846865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.513 #20 NEW cov: 12365 ft: 13927 corp: 10/101b lim: 35 exec/s: 0 rss: 74Mb L: 13/15 MS: 1 InsertByte- 00:08:18.513 [2024-12-05 20:28:11.896881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:4400ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.513 [2024-12-05 20:28:11.896910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.513 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:18.513 #26 NEW cov: 12388 ft: 14018 corp: 11/110b lim: 35 exec/s: 0 rss: 74Mb L: 9/15 MS: 1 ChangeByte- 00:08:18.769 [2024-12-05 20:28:11.967132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00ffbf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.769 [2024-12-05 20:28:11.967161] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.769 #27 NEW cov: 12388 ft: 14091 corp: 12/122b lim: 35 exec/s: 0 rss: 74Mb L: 12/15 MS: 1 ChangeBit- 00:08:18.769 [2024-12-05 20:28:12.017287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fff700c6 cdw11:5b00fff6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.769 [2024-12-05 20:28:12.017313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.769 #28 NEW cov: 12388 ft: 14129 corp: 13/129b lim: 35 exec/s: 28 rss: 74Mb L: 7/15 MS: 1 EraseBytes- 00:08:18.769 [2024-12-05 20:28:12.067491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff007dff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.769 [2024-12-05 20:28:12.067518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.769 #29 NEW cov: 12388 ft: 14146 corp: 14/139b lim: 35 exec/s: 29 rss: 74Mb L: 10/15 MS: 1 InsertByte- 00:08:18.769 [2024-12-05 20:28:12.137849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fff700c6 cdw11:5b00ffe6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:18.769 [2024-12-05 20:28:12.137877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.769 #30 NEW cov: 12388 ft: 14163 corp: 15/146b lim: 35 exec/s: 30 rss: 74Mb L: 7/15 MS: 1 ChangeBit- 00:08:19.025 [2024-12-05 20:28:12.208130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff005b cdw11:ff00ffbf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.025 [2024-12-05 20:28:12.208158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.025 #36 NEW cov: 12388 ft: 14183 corp: 16/158b lim: 35 exec/s: 36 rss: 75Mb L: 12/15 MS: 1 CopyPart- 00:08:19.025 [2024-12-05 20:28:12.278448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:4a00fff7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.025 [2024-12-05 20:28:12.278483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.025 #37 NEW cov: 12388 ft: 14187 corp: 17/167b lim: 35 exec/s: 37 rss: 75Mb L: 9/15 MS: 1 CrossOver- 00:08:19.025 [2024-12-05 20:28:12.328584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fffc00c6 cdw11:5b00ffe6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.025 [2024-12-05 20:28:12.328614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.025 #38 NEW cov: 12388 ft: 14224 corp: 18/174b lim: 35 exec/s: 38 rss: 75Mb L: 7/15 MS: 1 ChangeBinInt- 00:08:19.025 [2024-12-05 20:28:12.399207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffc600c6 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.025 [2024-12-05 20:28:12.399239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.025 [2024-12-05 20:28:12.399331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 
cdw10:f6f700ff cdw11:f6005bff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.025 [2024-12-05 20:28:12.399349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.025 #39 NEW cov: 12388 ft: 14340 corp: 19/190b lim: 35 exec/s: 39 rss: 75Mb L: 16/16 MS: 1 CrossOver- 00:08:19.025 [2024-12-05 20:28:12.449069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.025 [2024-12-05 20:28:12.449097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.281 #40 NEW cov: 12388 ft: 14348 corp: 20/201b lim: 35 exec/s: 40 rss: 75Mb L: 11/16 MS: 1 PersAutoDict- DE: "\377D"- 00:08:19.281 [2024-12-05 20:28:12.499150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.281 [2024-12-05 20:28:12.499180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.281 #41 NEW cov: 12388 ft: 14352 corp: 21/210b lim: 35 exec/s: 41 rss: 75Mb L: 9/16 MS: 1 CopyPart- 00:08:19.281 [2024-12-05 20:28:12.549565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:bf00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.281 [2024-12-05 20:28:12.549594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.281 [2024-12-05 20:28:12.549697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff005b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.281 [2024-12-05 20:28:12.549714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.281 #42 NEW cov: 12388 ft: 14397 corp: 22/225b lim: 35 exec/s: 42 rss: 75Mb L: 15/16 MS: 1 ChangeBit- 00:08:19.281 [2024-12-05 20:28:12.619522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff005b cdw11:ff00ffbf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.281 [2024-12-05 20:28:12.619552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.281 #43 NEW cov: 12388 ft: 14413 corp: 23/237b lim: 35 exec/s: 43 rss: 75Mb L: 12/16 MS: 1 ChangeBit- 00:08:19.281 [2024-12-05 20:28:12.689668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c6ff007f cdw11:e600f7ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.281 [2024-12-05 20:28:12.689696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.281 #44 NEW cov: 12388 ft: 14462 corp: 24/245b lim: 35 exec/s: 44 rss: 75Mb L: 8/16 MS: 1 InsertByte- 00:08:19.537 [2024-12-05 20:28:12.739930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4af600f7 cdw11:4a005b4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.537 [2024-12-05 20:28:12.739957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.537 #46 NEW cov: 12388 ft: 14480 corp: 25/254b lim: 35 exec/s: 
46 rss: 75Mb L: 9/16 MS: 2 EraseBytes-CopyPart- 00:08:19.537 [2024-12-05 20:28:12.810210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.537 [2024-12-05 20:28:12.810236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.537 #47 NEW cov: 12388 ft: 14496 corp: 26/266b lim: 35 exec/s: 47 rss: 75Mb L: 12/16 MS: 1 InsertByte- 00:08:19.537 [2024-12-05 20:28:12.880384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:4af600d7 cdw11:4a005b4a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.537 [2024-12-05 20:28:12.880413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.537 #48 NEW cov: 12388 ft: 14511 corp: 27/275b lim: 35 exec/s: 48 rss: 75Mb L: 9/16 MS: 1 ChangeBit- 00:08:19.537 [2024-12-05 20:28:12.950751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00f7c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.537 [2024-12-05 20:28:12.950778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.794 #49 NEW cov: 12388 ft: 14530 corp: 28/283b lim: 35 exec/s: 49 rss: 75Mb L: 8/16 MS: 1 EraseBytes- 00:08:19.794 [2024-12-05 20:28:13.021002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00c6 cdw11:5b0044f6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:19.794 [2024-12-05 20:28:13.021032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.794 #50 NEW cov: 12388 ft: 14552 corp: 29/290b lim: 35 exec/s: 25 rss: 75Mb L: 7/16 MS: 1 EraseBytes- 00:08:19.794 #50 DONE cov: 12388 ft: 14552 corp: 29/290b lim: 35 exec/s: 25 rss: 75Mb 00:08:19.794 ###### Recommended dictionary. ###### 00:08:19.794 "\377D" # Uses: 2 00:08:19.794 "\377\377\377\377" # Uses: 1 00:08:19.794 ###### End of recommended dictionary. 
###### 00:08:19.795 Done 50 runs in 2 second(s) 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:19.795 20:28:13 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:08:19.795 [2024-12-05 20:28:13.216629] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:19.795 [2024-12-05 20:28:13.216699] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1840841 ] 00:08:20.358 [2024-12-05 20:28:13.524429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.358 [2024-12-05 20:28:13.582824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.358 [2024-12-05 20:28:13.642019] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.358 [2024-12-05 20:28:13.658251] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:08:20.358 INFO: Running with entropic power schedule (0xFF, 100). 
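For reference, the -F argument assembled above ('trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403') is a standard SPDK transport-ID string. A minimal standalone sketch of how such a string decomposes, using the public spdk_nvme_transport_id_parse() API — the sketch itself is illustrative and not taken from the fuzzer sources:

#include <stdio.h>
#include <spdk/nvme.h>

int main(void)
{
        struct spdk_nvme_transport_id trid = {0};
        const char *str = "trtype:tcp adrfam:IPv4 "
                          "subnqn:nqn.2016-06.io.spdk:cnode1 "
                          "traddr:127.0.0.1 trsvcid:4403";

        /* splits the space-separated key:value pairs into trid fields */
        if (spdk_nvme_transport_id_parse(&trid, str) != 0) {
                fprintf(stderr, "failed to parse transport ID\n");
                return 1;
        }
        /* trid.traddr is "127.0.0.1" and trid.trsvcid is "4403" -- the
         * same address/port the log shows the TCP listener coming up on */
        printf("traddr=%s trsvcid=%s subnqn=%s\n",
               trid.traddr, trid.trsvcid, trid.subnqn);
        return 0;
}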
00:08:20.358 INFO: Seed: 1636370050 00:08:20.358 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:20.358 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:20.358 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:20.358 INFO: A corpus is not provided, starting from an empty corpus 00:08:20.358 #2 INITED exec/s: 0 rss: 67Mb 00:08:20.358 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:20.358 This may also happen if the target rejected all inputs we tried so far 00:08:20.919 NEW_FUNC[1/705]: 0x440c78 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:08:20.919 NEW_FUNC[2/705]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:20.919 #4 NEW cov: 12046 ft: 12046 corp: 2/5b lim: 20 exec/s: 0 rss: 74Mb L: 4/4 MS: 2 CMP-CrossOver- DE: "\000\000"- 00:08:20.919 #10 NEW cov: 12159 ft: 12739 corp: 3/10b lim: 20 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:08:20.919 [2024-12-05 20:28:14.180669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:20.919 [2024-12-05 20:28:14.180716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.919 NEW_FUNC[1/17]: 0x137b7e8 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3484 00:08:20.919 NEW_FUNC[2/17]: 0x137c368 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3426 00:08:20.919 #11 NEW cov: 12420 ft: 13319 corp: 4/15b lim: 20 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:08:20.919 #12 NEW cov: 12505 ft: 13583 corp: 5/20b lim: 20 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:08:20.919 #13 NEW cov: 12505 ft: 13663 corp: 6/24b lim: 20 exec/s: 0 rss: 74Mb L: 4/5 MS: 1 ChangeBit- 00:08:21.175 [2024-12-05 20:28:14.361529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.175 [2024-12-05 20:28:14.361571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.175 #14 NEW cov: 12505 ft: 13794 corp: 7/29b lim: 20 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:08:21.175 #15 NEW cov: 12522 ft: 14268 corp: 8/46b lim: 20 exec/s: 0 rss: 74Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:08:21.175 #17 NEW cov: 12522 ft: 14287 corp: 9/50b lim: 20 exec/s: 0 rss: 74Mb L: 4/17 MS: 2 EraseBytes-InsertByte- 00:08:21.175 #23 NEW cov: 12522 ft: 14337 corp: 10/55b lim: 20 exec/s: 0 rss: 74Mb L: 5/17 MS: 1 CopyPart- 00:08:21.431 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:21.431 #24 NEW cov: 12545 ft: 14439 corp: 11/59b lim: 20 exec/s: 0 rss: 74Mb L: 4/17 MS: 1 ChangeBit- 00:08:21.431 #25 NEW cov: 12550 ft: 14671 corp: 12/68b lim: 20 exec/s: 0 rss: 74Mb L: 9/17 MS: 1 EraseBytes- 00:08:21.431 [2024-12-05 20:28:14.723789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.431 [2024-12-05 20:28:14.723820] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.431 NEW_FUNC[1/3]: 0x14f30c8 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:819 00:08:21.431 NEW_FUNC[2/3]: 0x1519628 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3687 00:08:21.431 #26 NEW cov: 12634 ft: 14905 corp: 13/82b lim: 20 exec/s: 26 rss: 75Mb L: 14/17 MS: 1 InsertRepeatedBytes- 00:08:21.431 [2024-12-05 20:28:14.804335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.431 [2024-12-05 20:28:14.804366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.431 #27 NEW cov: 12634 ft: 14947 corp: 14/101b lim: 20 exec/s: 27 rss: 75Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:08:21.687 #28 NEW cov: 12637 ft: 15012 corp: 15/106b lim: 20 exec/s: 28 rss: 75Mb L: 5/19 MS: 1 CMP- DE: "\377\013"- 00:08:21.687 [2024-12-05 20:28:14.944227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.687 [2024-12-05 20:28:14.944256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.687 #29 NEW cov: 12637 ft: 15030 corp: 16/111b lim: 20 exec/s: 29 rss: 75Mb L: 5/19 MS: 1 CopyPart- 00:08:21.687 #30 NEW cov: 12637 ft: 15066 corp: 17/115b lim: 20 exec/s: 30 rss: 75Mb L: 4/19 MS: 1 ChangeBit- 00:08:21.687 #31 NEW cov: 12637 ft: 15099 corp: 18/128b lim: 20 exec/s: 31 rss: 75Mb L: 13/19 MS: 1 InsertRepeatedBytes- 00:08:21.943 #32 NEW cov: 12637 ft: 15122 corp: 19/133b lim: 20 exec/s: 32 rss: 75Mb L: 5/19 MS: 1 CrossOver- 00:08:21.943 #37 NEW cov: 12637 ft: 15197 corp: 20/150b lim: 20 exec/s: 37 rss: 75Mb L: 17/19 MS: 5 EraseBytes-ChangeBinInt-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:08:21.943 [2024-12-05 20:28:15.245605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.943 [2024-12-05 20:28:15.245640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.943 #38 NEW cov: 12637 ft: 15228 corp: 21/158b lim: 20 exec/s: 38 rss: 75Mb L: 8/19 MS: 1 InsertRepeatedBytes- 00:08:21.943 [2024-12-05 20:28:15.296114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.943 [2024-12-05 20:28:15.296144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.943 #39 NEW cov: 12637 ft: 15238 corp: 22/173b lim: 20 exec/s: 39 rss: 75Mb L: 15/19 MS: 1 InsertByte- 00:08:21.943 [2024-12-05 20:28:15.346535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.943 [2024-12-05 20:28:15.346568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.199 #40 NEW cov: 12637 ft: 15253 corp: 23/192b lim: 20 exec/s: 40 rss: 75Mb L: 19/19 MS: 1 CrossOver- 00:08:22.199 #41 NEW cov: 12637 ft: 15269 corp: 24/196b lim: 20 exec/s: 41 rss: 75Mb L: 4/19 MS: 1 EraseBytes- 00:08:22.199 
#42 NEW cov: 12637 ft: 15301 corp: 25/216b lim: 20 exec/s: 42 rss: 75Mb L: 20/20 MS: 1 CopyPart- 00:08:22.199 #43 NEW cov: 12637 ft: 15310 corp: 26/234b lim: 20 exec/s: 43 rss: 75Mb L: 18/20 MS: 1 CopyPart- 00:08:22.199 [2024-12-05 20:28:15.617290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:22.199 [2024-12-05 20:28:15.617327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.457 #44 NEW cov: 12637 ft: 15319 corp: 27/249b lim: 20 exec/s: 44 rss: 75Mb L: 15/20 MS: 1 ShuffleBytes- 00:08:22.457 [2024-12-05 20:28:15.687760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:22.457 [2024-12-05 20:28:15.687787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.457 #45 NEW cov: 12637 ft: 15373 corp: 28/264b lim: 20 exec/s: 22 rss: 75Mb L: 15/20 MS: 1 InsertByte- 00:08:22.457 #45 DONE cov: 12637 ft: 15373 corp: 28/264b lim: 20 exec/s: 22 rss: 75Mb 00:08:22.457 ###### Recommended dictionary. ###### 00:08:22.457 "\000\000" # Uses: 4 00:08:22.457 "\377\013" # Uses: 0 00:08:22.457 ###### End of recommended dictionary. ###### 00:08:22.457 Done 45 runs in 2 second(s) 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.457 20:28:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:22.457 20:28:15 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:08:22.457 [2024-12-05 20:28:15.867638] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:22.457 [2024-12-05 20:28:15.867711] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841213 ] 00:08:23.021 [2024-12-05 20:28:16.180034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.022 [2024-12-05 20:28:16.239274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.022 [2024-12-05 20:28:16.298333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.022 [2024-12-05 20:28:16.314573] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:08:23.022 INFO: Running with entropic power schedule (0xFF, 100). 00:08:23.022 INFO: Seed: 4292364643 00:08:23.022 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:23.022 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:23.022 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:23.022 INFO: A corpus is not provided, starting from an empty corpus 00:08:23.022 #2 INITED exec/s: 0 rss: 67Mb 00:08:23.022 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
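The NEW_FUNC lines that follow name the harness entry points for this run (TestOneInput dispatching into a per-opcode fuzz_admin_* handler in llvm_nvme_fuzz.c). The overall shape is the standard libFuzzer pattern; a simplified, self-contained sketch in which the struct and handler are hypothetical stand-ins rather than the SPDK sources:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct nvme_cmd {               /* stand-in for the admin SQE fields seen in the log */
        uint8_t  opc;           /* opcode: 0x06 IDENTIFY, 0x0c ASYNC EVENT, 0x05 CREATE IO CQ */
        uint32_t nsid;
        uint32_t cdw10;
        uint32_t cdw11;
};

static void fuzz_admin_command(const struct nvme_cmd *cmd)
{
        /* the real harness submits the command to the target and waits for
         * the completion; each submit/complete pair is what the paired
         * nvme_admin_qpair_print_command / spdk_nvme_print_completion
         * NOTICE lines in this log record */
        (void)cmd;
}

/* standard libFuzzer entry point, built with clang -fsanitize=fuzzer */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
        struct nvme_cmd cmd;

        if (size < sizeof(cmd)) {
                return 0;       /* too short to form a command */
        }
        memcpy(&cmd, data, sizeof(cmd));
        fuzz_admin_command(&cmd);
        return 0;               /* non-crashing input; coverage decides whether it is kept */
}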
00:08:23.022 This may also happen if the target rejected all inputs we tried so far 00:08:23.022 [2024-12-05 20:28:16.363840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.022 [2024-12-05 20:28:16.363871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.022 [2024-12-05 20:28:16.363929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.022 [2024-12-05 20:28:16.363943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.022 [2024-12-05 20:28:16.363997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.022 [2024-12-05 20:28:16.364011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.022 [2024-12-05 20:28:16.364062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.022 [2024-12-05 20:28:16.364076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.280 NEW_FUNC[1/717]: 0x441d78 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:08:23.280 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:23.280 #10 NEW cov: 12172 ft: 12172 corp: 2/33b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:08:23.280 [2024-12-05 20:28:16.714815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.280 [2024-12-05 20:28:16.714864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.280 [2024-12-05 20:28:16.714929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.280 [2024-12-05 20:28:16.714948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.280 [2024-12-05 20:28:16.715011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.280 [2024-12-05 20:28:16.715029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.280 [2024-12-05 20:28:16.715091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.280 [2024-12-05 20:28:16.715109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.538 #11 NEW 
cov: 12286 ft: 12825 corp: 3/65b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:23.538 [2024-12-05 20:28:16.774771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.774803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.774859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.774873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.774926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.774940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.774994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.775007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.538 #12 NEW cov: 12292 ft: 12988 corp: 4/97b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:23.538 [2024-12-05 20:28:16.834886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.834915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.834971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.834985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.835039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.835053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.835105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.835118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.538 #13 NEW cov: 12377 ft: 13302 corp: 5/129b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:23.538 [2024-12-05 20:28:16.875055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.875080] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.875137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.875151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.875205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.875219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.875273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.875290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.538 #14 NEW cov: 12377 ft: 13373 corp: 6/161b lim: 35 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:23.538 [2024-12-05 20:28:16.914837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d2d2d2d2 cdw11:d2d20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.914862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.538 [2024-12-05 20:28:16.914917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2300000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.538 [2024-12-05 20:28:16.914931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.538 #18 NEW cov: 12377 ft: 13816 corp: 7/175b lim: 35 exec/s: 0 rss: 74Mb L: 14/32 MS: 4 CopyPart-ChangeByte-InsertByte-InsertRepeatedBytes- 00:08:23.538 [2024-12-05 20:28:16.955262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.539 [2024-12-05 20:28:16.955287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.539 [2024-12-05 20:28:16.955343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.539 [2024-12-05 20:28:16.955357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.539 [2024-12-05 20:28:16.955412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.539 [2024-12-05 20:28:16.955426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.539 [2024-12-05 20:28:16.955480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.539 
[2024-12-05 20:28:16.955494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.797 #19 NEW cov: 12377 ft: 13856 corp: 8/209b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:08:23.797 [2024-12-05 20:28:16.995382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:16.995408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:16.995480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:16.995495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:16.995549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:16.995563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:16.995618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:16.995632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.797 #20 NEW cov: 12377 ft: 13917 corp: 9/241b lim: 35 exec/s: 0 rss: 74Mb L: 32/34 MS: 1 CMP- DE: "\002\000\000\000"- 00:08:23.797 [2024-12-05 20:28:17.055547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.055573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.055631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.055645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.055700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.055713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.055786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.055801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.797 #21 NEW cov: 12377 ft: 13945 corp: 10/273b lim: 35 exec/s: 0 rss: 74Mb L: 32/34 MS: 1 CopyPart- 00:08:23.797 [2024-12-05 20:28:17.115670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.115695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.115768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.115784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.115850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.115864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.115917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.115931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.797 #22 NEW cov: 12377 ft: 14032 corp: 11/305b lim: 35 exec/s: 0 rss: 74Mb L: 32/34 MS: 1 ChangeBit- 00:08:23.797 [2024-12-05 20:28:17.155788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.155813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.155884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.155899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.155954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.155968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.156021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.156038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:23.797 #23 NEW cov: 12377 ft: 14073 corp: 12/338b lim: 35 exec/s: 0 rss: 74Mb L: 33/34 MS: 1 CopyPart- 00:08:23.797 [2024-12-05 20:28:17.215957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.215982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.216040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.216054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.216108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.216122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:23.797 [2024-12-05 20:28:17.216174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:23.797 [2024-12-05 20:28:17.216188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.055 #24 NEW cov: 12377 ft: 14091 corp: 13/370b lim: 35 exec/s: 0 rss: 74Mb L: 32/34 MS: 1 ChangeByte- 00:08:24.055 [2024-12-05 20:28:17.256090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.256116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.055 [2024-12-05 20:28:17.256188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.256203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.055 [2024-12-05 20:28:17.256259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.256273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.055 [2024-12-05 20:28:17.256328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.256342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.055 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:24.055 #25 NEW cov: 12400 ft: 14174 corp: 14/403b lim: 35 exec/s: 0 rss: 75Mb L: 33/34 MS: 1 CopyPart- 00:08:24.055 [2024-12-05 20:28:17.315914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.315939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.055 [2024-12-05 20:28:17.316010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.316025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.055 #28 NEW cov: 12400 ft: 14198 corp: 15/418b lim: 35 exec/s: 0 rss: 75Mb L: 15/34 MS: 3 ChangeByte-ChangeBinInt-CrossOver- 00:08:24.055 [2024-12-05 20:28:17.356014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d2d2d2d2 cdw11:d2d20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.356039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.055 [2024-12-05 20:28:17.356110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2300000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.356125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.055 #29 NEW cov: 12400 ft: 14215 corp: 16/432b lim: 35 exec/s: 29 rss: 75Mb L: 14/34 MS: 1 ShuffleBytes- 00:08:24.055 [2024-12-05 20:28:17.416517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.416542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.055 [2024-12-05 20:28:17.416598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.416611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.055 [2024-12-05 20:28:17.416666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.055 [2024-12-05 20:28:17.416680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.056 [2024-12-05 20:28:17.416729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00250000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.056 [2024-12-05 20:28:17.416747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.056 #30 NEW cov: 12400 ft: 14223 corp: 17/461b lim: 35 exec/s: 30 rss: 75Mb L: 29/34 MS: 1 EraseBytes- 00:08:24.056 [2024-12-05 20:28:17.476390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000ce00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.056 [2024-12-05 20:28:17.476416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.056 [2024-12-05 20:28:17.476472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00100000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.056 [2024-12-05 20:28:17.476487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.313 #31 NEW cov: 12400 ft: 14287 corp: 18/476b lim: 35 exec/s: 31 rss: 75Mb L: 15/34 MS: 1 CopyPart- 00:08:24.313 [2024-12-05 20:28:17.536904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.536931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.537004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00270000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.537019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.537075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.537092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.537149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.537163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.313 #32 NEW cov: 12400 ft: 14297 corp: 19/509b lim: 35 exec/s: 32 rss: 75Mb L: 33/34 MS: 1 InsertByte- 00:08:24.313 [2024-12-05 20:28:17.577015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.577040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.577113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00100000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.577128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.577185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.577199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.577258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.577272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.313 #33 NEW cov: 12400 ft: 14307 corp: 20/541b lim: 35 exec/s: 33 rss: 75Mb L: 32/34 MS: 1 ChangeBit- 00:08:24.313 [2024-12-05 20:28:17.617133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.617158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.617228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.617242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.617296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.617310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.617364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00490000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.617378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.313 #34 NEW cov: 12400 ft: 14317 corp: 21/573b lim: 35 exec/s: 34 rss: 75Mb L: 32/34 MS: 1 ChangeByte- 00:08:24.313 [2024-12-05 20:28:17.657230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.657255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.657328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.657346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.657402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.657415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.657472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.657486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.313 #35 NEW cov: 12400 ft: 14348 corp: 22/605b lim: 35 exec/s: 35 rss: 75Mb L: 32/34 MS: 1 PersAutoDict- DE: "\002\000\000\000"- 00:08:24.313 [2024-12-05 20:28:17.697346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.697371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.313 [2024-12-05 20:28:17.697425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.313 [2024-12-05 20:28:17.697439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.314 [2024-12-05 20:28:17.697492] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.314 [2024-12-05 20:28:17.697506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.314 [2024-12-05 20:28:17.697561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000010 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.314 [2024-12-05 20:28:17.697575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.314 #36 NEW cov: 12400 ft: 14354 corp: 23/637b lim: 35 exec/s: 36 rss: 75Mb L: 32/34 MS: 1 ChangeBit- 00:08:24.314 [2024-12-05 20:28:17.737430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.314 [2024-12-05 20:28:17.737457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.314 [2024-12-05 20:28:17.737526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.314 [2024-12-05 20:28:17.737541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.314 [2024-12-05 20:28:17.737596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.314 [2024-12-05 20:28:17.737610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.314 [2024-12-05 20:28:17.737665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.314 [2024-12-05 20:28:17.737678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.571 #37 NEW cov: 12400 ft: 14365 corp: 24/665b lim: 35 exec/s: 37 rss: 75Mb L: 28/34 MS: 1 EraseBytes- 00:08:24.571 [2024-12-05 20:28:17.777596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.777627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.777698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00270000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.777713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.777771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.777786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.777840] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.777855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.572 #38 NEW cov: 12400 ft: 14410 corp: 25/698b lim: 35 exec/s: 38 rss: 75Mb L: 33/34 MS: 1 CopyPart- 00:08:24.572 [2024-12-05 20:28:17.837563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.837589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.837647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00270000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.837661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.837718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.837732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.572 #39 NEW cov: 12400 ft: 14629 corp: 26/720b lim: 35 exec/s: 39 rss: 75Mb L: 22/34 MS: 1 EraseBytes- 00:08:24.572 [2024-12-05 20:28:17.897864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00fb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.897890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.897962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.897978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.898033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.898046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.898101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.898115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.572 #40 NEW cov: 12400 ft: 14637 corp: 27/752b lim: 35 exec/s: 40 rss: 75Mb L: 32/34 MS: 1 ChangeBinInt- 00:08:24.572 [2024-12-05 20:28:17.937992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.938023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.938082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.938097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.938152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:002c0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.938166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.938220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00490000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.938234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.572 #41 NEW cov: 12400 ft: 14638 corp: 28/784b lim: 35 exec/s: 41 rss: 75Mb L: 32/34 MS: 1 ChangeByte- 00:08:24.572 [2024-12-05 20:28:17.998159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00fb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.998185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.998240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.998253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.998311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.998324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.572 [2024-12-05 20:28:17.998377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.572 [2024-12-05 20:28:17.998391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.830 #42 NEW cov: 12400 ft: 14665 corp: 29/816b lim: 35 exec/s: 42 rss: 75Mb L: 32/34 MS: 1 CrossOver- 00:08:24.830 [2024-12-05 20:28:18.058297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.058323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.830 [2024-12-05 20:28:18.058395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.058411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.830 [2024-12-05 20:28:18.058467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.058480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.830 [2024-12-05 20:28:18.058537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.058551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.830 #43 NEW cov: 12400 ft: 14697 corp: 30/848b lim: 35 exec/s: 43 rss: 75Mb L: 32/34 MS: 1 ShuffleBytes- 00:08:24.830 [2024-12-05 20:28:18.098078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.098103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.830 [2024-12-05 20:28:18.098176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.098191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.830 #44 NEW cov: 12400 ft: 14702 corp: 31/866b lim: 35 exec/s: 44 rss: 75Mb L: 18/34 MS: 1 EraseBytes- 00:08:24.830 [2024-12-05 20:28:18.158429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.158452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.830 [2024-12-05 20:28:18.158526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.158540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.830 [2024-12-05 20:28:18.158597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.158610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.830 #45 NEW cov: 12400 ft: 14713 corp: 32/892b lim: 35 exec/s: 45 rss: 75Mb L: 26/34 MS: 1 EraseBytes- 00:08:24.830 [2024-12-05 20:28:18.198719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00fb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.198748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.830 [2024-12-05 20:28:18.198832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:24.830 [2024-12-05 20:28:18.198846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.830 [2024-12-05 20:28:18.198903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.198916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.830 [2024-12-05 20:28:18.198972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000024 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.198985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.830 #46 NEW cov: 12400 ft: 14788 corp: 33/924b lim: 35 exec/s: 46 rss: 75Mb L: 32/34 MS: 1 ChangeByte- 00:08:24.830 [2024-12-05 20:28:18.239006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.830 [2024-12-05 20:28:18.239030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:24.831 [2024-12-05 20:28:18.239100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.831 [2024-12-05 20:28:18.239118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:24.831 [2024-12-05 20:28:18.239174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.831 [2024-12-05 20:28:18.239188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:24.831 [2024-12-05 20:28:18.239246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.831 [2024-12-05 20:28:18.239259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:24.831 [2024-12-05 20:28:18.239314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:24.831 [2024-12-05 20:28:18.239328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:24.831 #47 NEW cov: 12400 ft: 14842 corp: 34/959b lim: 35 exec/s: 47 rss: 75Mb L: 35/35 MS: 1 CrossOver- 00:08:25.089 [2024-12-05 20:28:18.278933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.089 [2024-12-05 20:28:18.278958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.089 [2024-12-05 20:28:18.279027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00100000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:25.089 [2024-12-05 20:28:18.279042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.089 [2024-12-05 20:28:18.279094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:10000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.089 [2024-12-05 20:28:18.279108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.089 [2024-12-05 20:28:18.279161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.089 [2024-12-05 20:28:18.279174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.089 #48 NEW cov: 12400 ft: 14891 corp: 35/992b lim: 35 exec/s: 48 rss: 75Mb L: 33/35 MS: 1 CopyPart- 00:08:25.089 [2024-12-05 20:28:18.339056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.089 [2024-12-05 20:28:18.339082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.089 [2024-12-05 20:28:18.339137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00100000 cdw11:007c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.089 [2024-12-05 20:28:18.339151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.089 [2024-12-05 20:28:18.339207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00100000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.089 [2024-12-05 20:28:18.339221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.090 [2024-12-05 20:28:18.339274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.090 [2024-12-05 20:28:18.339288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:25.090 #49 NEW cov: 12400 ft: 14897 corp: 36/1026b lim: 35 exec/s: 24 rss: 75Mb L: 34/35 MS: 1 InsertByte- 00:08:25.090 #49 DONE cov: 12400 ft: 14897 corp: 36/1026b lim: 35 exec/s: 24 rss: 75Mb 00:08:25.090 ###### Recommended dictionary. ###### 00:08:25.090 "\002\000\000\000" # Uses: 1 00:08:25.090 ###### End of recommended dictionary. 
###### 00:08:25.090 Done 49 runs in 2 second(s) 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:25.090 20:28:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:08:25.349 [2024-12-05 20:28:18.545159] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:25.349 [2024-12-05 20:28:18.545249] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841578 ] 00:08:25.607 [2024-12-05 20:28:18.843736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.607 [2024-12-05 20:28:18.902650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.607 [2024-12-05 20:28:18.961977] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.607 [2024-12-05 20:28:18.978228] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:08:25.607 INFO: Running with entropic power schedule (0xFF, 100). 
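The set -x trace above shows how nvmf/run.sh prepares each fuzzer instance: the TCP listen port is derived from the fuzzer index (index 5 yields 4405), the stock fuzz_json.conf is rewritten to that trsvcid, and two known allocations are suppressed so LSAN does not abort the run. A condensed sketch of those steps follows, with paths shortened; the "44" port prefix and the sed output destination are inferred from the trace (printf %02d 5 followed by port=4405, and the -c flag that names /tmp/fuzz_json_5.conf), so nvmf/run.sh itself remains the authoritative source.

    # Condensed from the traced commands above; nvmf/run.sh is authoritative.
    fuzzer_type=5
    port="44$(printf %02d "$fuzzer_type")"   # trace shows printf %02d 5 -> port=4405
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # Rewrite the listen port in the JSON config (destination inferred from -c below).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        test/fuzz/llvm/nvmf/fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"
    # Suppress known, intentional leaks so LSAN does not fail the run.
    echo leak:spdk_nvmf_qpair_disconnect  > /var/tmp/suppress_nvmf_fuzz
    echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_nvmf_fuzz
    LSAN_OPTIONS="report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0" \
      ./test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 \
        -F "$trid" -c "/tmp/fuzz_json_${fuzzer_type}.conf" -t 1 \
        -D "../corpus/llvm_nvmf_${fuzzer_type}" -Z "$fuzzer_type"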
00:08:25.607 INFO: Seed: 2661397495 00:08:25.607 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:25.607 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:25.607 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:25.607 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.607 #2 INITED exec/s: 0 rss: 67Mb 00:08:25.607 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:25.607 This may also happen if the target rejected all inputs we tried so far 00:08:25.607 [2024-12-05 20:28:19.027908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:adad250a cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.607 [2024-12-05 20:28:19.027943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:25.607 [2024-12-05 20:28:19.028014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.607 [2024-12-05 20:28:19.028029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:25.607 [2024-12-05 20:28:19.028085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.607 [2024-12-05 20:28:19.028099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:25.607 [2024-12-05 20:28:19.028153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.607 [2024-12-05 20:28:19.028166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.123 NEW_FUNC[1/717]: 0x443f18 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:08:26.123 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:26.123 #14 NEW cov: 12184 ft: 12183 corp: 2/43b lim: 45 exec/s: 0 rss: 74Mb L: 42/42 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:26.123 [2024-12-05 20:28:19.368265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0aad0a25 cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.123 [2024-12-05 20:28:19.368306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.123 [2024-12-05 20:28:19.368358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.123 [2024-12-05 20:28:19.368371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.123 #15 NEW cov: 12297 ft: 13256 corp: 3/66b lim: 45 exec/s: 0 rss: 74Mb L: 23/42 MS: 1 CrossOver- 00:08:26.123 [2024-12-05 20:28:19.408409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) 
qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.123 [2024-12-05 20:28:19.408438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.123 [2024-12-05 20:28:19.408491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.123 [2024-12-05 20:28:19.408505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.123 [2024-12-05 20:28:19.408555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.123 [2024-12-05 20:28:19.408569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.123 #16 NEW cov: 12303 ft: 13581 corp: 4/96b lim: 45 exec/s: 0 rss: 74Mb L: 30/42 MS: 1 InsertRepeatedBytes- 00:08:26.123 [2024-12-05 20:28:19.468563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.123 [2024-12-05 20:28:19.468589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.123 [2024-12-05 20:28:19.468641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.123 [2024-12-05 20:28:19.468659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.123 [2024-12-05 20:28:19.468708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ad43adad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.124 [2024-12-05 20:28:19.468721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.124 #17 NEW cov: 12388 ft: 13851 corp: 5/126b lim: 45 exec/s: 0 rss: 74Mb L: 30/42 MS: 1 ChangeByte- 00:08:26.124 [2024-12-05 20:28:19.528716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.124 [2024-12-05 20:28:19.528740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.124 [2024-12-05 20:28:19.528811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.124 [2024-12-05 20:28:19.528825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.124 [2024-12-05 20:28:19.528875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.124 [2024-12-05 20:28:19.528888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.382 #18 NEW cov: 12388 ft: 13919 corp: 6/156b lim: 45 exec/s: 0 rss: 74Mb L: 30/42 MS: 1 CrossOver- 
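Each *NOTICE* pair in these records is SPDK echoing the fuzzed admin command and the target's completion. Read against the NVMe spec, the fields decode as follows (field meanings only; the values are whatever the fuzzer generated):

    # CREATE IO SQ (01) / CREATE IO CQ (05)  admin opcode being exercised
    # qid / cid / nsid                       queue ID, command ID, namespace ID
    # cdw10 / cdw11                          command dwords 10-11 (queue ID/size, flags)
    # INVALID OPCODE (00/01)                 completion status SCT 0x0, SC 0x01 --
    #                                        the target rejected the fuzzed command
    # sqhd / p / m / dnr                     SQ head pointer, phase tag, "more" bit,
    #                                        and do-not-retry bit of the status field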
00:08:26.382 [2024-12-05 20:28:19.588616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a250a cdw11:0aad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.588642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.382 #21 NEW cov: 12388 ft: 14646 corp: 7/170b lim: 45 exec/s: 0 rss: 74Mb L: 14/42 MS: 3 CrossOver-CopyPart-CopyPart- 00:08:26.382 [2024-12-05 20:28:19.649053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.649079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.649145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.649159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.649211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.649225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.382 #22 NEW cov: 12388 ft: 14754 corp: 8/200b lim: 45 exec/s: 0 rss: 74Mb L: 30/42 MS: 1 ChangeBit- 00:08:26.382 [2024-12-05 20:28:19.689326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000250a cdw11:00250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.689351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.689404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.689417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.689470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:2dad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.689483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.689533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.689546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.382 #23 NEW cov: 12388 ft: 14802 corp: 9/242b lim: 45 exec/s: 0 rss: 74Mb L: 42/42 MS: 1 CrossOver- 00:08:26.382 [2024-12-05 20:28:19.729425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.729450] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.729500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:ad4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.729513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.729564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4a4a4a4a cdw11:4a4a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.729578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.729627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.729640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.382 #24 NEW cov: 12388 ft: 14832 corp: 10/282b lim: 45 exec/s: 0 rss: 74Mb L: 40/42 MS: 1 InsertRepeatedBytes- 00:08:26.382 [2024-12-05 20:28:19.789461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.789486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.789553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.789566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.382 [2024-12-05 20:28:19.789618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-12-05 20:28:19.789632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.382 #25 NEW cov: 12388 ft: 14959 corp: 11/312b lim: 45 exec/s: 0 rss: 74Mb L: 30/42 MS: 1 ShuffleBytes- 00:08:26.642 [2024-12-05 20:28:19.829557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.829583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:19.829651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.829665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:19.829721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 
20:28:19.829735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.642 #26 NEW cov: 12388 ft: 15079 corp: 12/346b lim: 45 exec/s: 0 rss: 74Mb L: 34/42 MS: 1 CrossOver- 00:08:26.642 [2024-12-05 20:28:19.869542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a250a cdw11:0aad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.869567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:19.869634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0aad0aad cdw11:ad0a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.869649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.642 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:26.642 #27 NEW cov: 12411 ft: 15091 corp: 13/365b lim: 45 exec/s: 0 rss: 75Mb L: 19/42 MS: 1 CopyPart- 00:08:26.642 [2024-12-05 20:28:19.929855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.929880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:19.929947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0025a100 cdw11:0aad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.929962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:19.930017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.930030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.642 #33 NEW cov: 12411 ft: 15124 corp: 14/399b lim: 45 exec/s: 0 rss: 75Mb L: 34/42 MS: 1 InsertRepeatedBytes- 00:08:26.642 [2024-12-05 20:28:19.970094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.970118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:19.970188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:ad4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.970202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:19.970256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4a4ab5b3 cdw11:4a4a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.970269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:19.970320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:19.970334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.642 #34 NEW cov: 12411 ft: 15199 corp: 15/439b lim: 45 exec/s: 34 rss: 75Mb L: 40/42 MS: 1 ChangeBinInt- 00:08:26.642 [2024-12-05 20:28:20.030329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:20.030362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:20.030433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:ad4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:20.030448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:20.030503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4a4ab5b3 cdw11:4a4a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:20.030517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.642 [2024-12-05 20:28:20.030570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.642 [2024-12-05 20:28:20.030583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:26.642 #35 NEW cov: 12411 ft: 15266 corp: 16/479b lim: 45 exec/s: 35 rss: 75Mb L: 40/42 MS: 1 CopyPart- 00:08:26.901 [2024-12-05 20:28:20.090329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.901 [2024-12-05 20:28:20.090361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.901 [2024-12-05 20:28:20.090415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.901 [2024-12-05 20:28:20.090429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.901 [2024-12-05 20:28:20.090481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.901 [2024-12-05 20:28:20.090495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.901 #36 NEW cov: 12411 ft: 15274 corp: 17/509b lim: 45 exec/s: 36 rss: 75Mb L: 30/42 MS: 1 ChangeBinInt- 00:08:26.901 [2024-12-05 20:28:20.130247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.901 [2024-12-05 
20:28:20.130275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.901 [2024-12-05 20:28:20.130328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.901 [2024-12-05 20:28:20.130343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.901 #37 NEW cov: 12411 ft: 15287 corp: 18/529b lim: 45 exec/s: 37 rss: 75Mb L: 20/42 MS: 1 EraseBytes- 00:08:26.901 [2024-12-05 20:28:20.190245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a250a cdw11:ad0a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.901 [2024-12-05 20:28:20.190271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.901 #38 NEW cov: 12411 ft: 15319 corp: 19/539b lim: 45 exec/s: 38 rss: 75Mb L: 10/42 MS: 1 EraseBytes- 00:08:26.901 [2024-12-05 20:28:20.230546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a250a cdw11:0aad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.901 [2024-12-05 20:28:20.230571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.901 [2024-12-05 20:28:20.230626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0aad0aad cdw11:4ff50002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.901 [2024-12-05 20:28:20.230639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.901 #39 NEW cov: 12411 ft: 15370 corp: 20/558b lim: 45 exec/s: 39 rss: 75Mb L: 19/42 MS: 1 ChangeBinInt- 00:08:26.901 [2024-12-05 20:28:20.290871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.901 [2024-12-05 20:28:20.290896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:26.902 [2024-12-05 20:28:20.290949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:4c520002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.902 [2024-12-05 20:28:20.290963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:26.902 [2024-12-05 20:28:20.291014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:52ad5252 cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.902 [2024-12-05 20:28:20.291028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:26.902 #40 NEW cov: 12411 ft: 15375 corp: 21/588b lim: 45 exec/s: 40 rss: 75Mb L: 30/42 MS: 1 ChangeBinInt- 00:08:26.902 [2024-12-05 20:28:20.330650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3a25250a cdw11:0a0a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.902 [2024-12-05 20:28:20.330676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
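The #NN NEW lines threaded through the command traces are standard libFuzzer progress records rather than SPDK output. Taking #37 above as the example, the fields read as follows (libFuzzer convention):

    # #37 NEW            execution count; NEW = input kept in the corpus
    # cov: 12411         code locations covered so far
    # ft: 15287          coverage features (finer-grained, incl. hit counters)
    # corp: 18/529b      corpus entries / total corpus size in bytes
    # lim: 45            current input-length cap as it ramps toward max_len
    # exec/s: 37         executions per second (0 while under one second)
    # rss: 75Mb          resident memory of the fuzzing process
    # L: 20/42           size of this input / largest input in the corpus
    # MS: 1 EraseBytes-  the mutation sequence that produced the input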
00:08:27.160 #41 NEW cov: 12411 ft: 15408 corp: 22/603b lim: 45 exec/s: 41 rss: 75Mb L: 15/42 MS: 1 InsertByte- 00:08:27.160 [2024-12-05 20:28:20.370943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a250a cdw11:0aad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.160 [2024-12-05 20:28:20.370967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.160 [2024-12-05 20:28:20.371037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ad0aad0a cdw11:ad4f0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.160 [2024-12-05 20:28:20.371051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.160 #47 NEW cov: 12411 ft: 15453 corp: 23/623b lim: 45 exec/s: 47 rss: 75Mb L: 20/42 MS: 1 InsertByte- 00:08:27.160 [2024-12-05 20:28:20.431432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.160 [2024-12-05 20:28:20.431457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.161 [2024-12-05 20:28:20.431509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:ad4a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.431523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.161 [2024-12-05 20:28:20.431573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:4a4ab5b3 cdw11:4a4a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.431587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.161 [2024-12-05 20:28:20.431636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.431652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:27.161 #48 NEW cov: 12411 ft: 15459 corp: 24/663b lim: 45 exec/s: 48 rss: 75Mb L: 40/42 MS: 1 CopyPart- 00:08:27.161 [2024-12-05 20:28:20.491437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.491462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.161 [2024-12-05 20:28:20.491516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.491530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.161 [2024-12-05 20:28:20.491583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ad43adad cdw11:ad7a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.491597] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.161 #49 NEW cov: 12411 ft: 15497 corp: 25/693b lim: 45 exec/s: 49 rss: 75Mb L: 30/42 MS: 1 ChangeByte- 00:08:27.161 [2024-12-05 20:28:20.531524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.531550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.161 [2024-12-05 20:28:20.531603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.531617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.161 [2024-12-05 20:28:20.531668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.531682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.161 #50 NEW cov: 12411 ft: 15521 corp: 26/723b lim: 45 exec/s: 50 rss: 75Mb L: 30/42 MS: 1 ChangeBinInt- 00:08:27.161 [2024-12-05 20:28:20.591688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.591713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.161 [2024-12-05 20:28:20.591786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d1d1d1d1 cdw11:d10a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.591801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.161 [2024-12-05 20:28:20.591851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.161 [2024-12-05 20:28:20.591864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.419 #51 NEW cov: 12411 ft: 15563 corp: 27/758b lim: 45 exec/s: 51 rss: 75Mb L: 35/42 MS: 1 InsertRepeatedBytes- 00:08:27.419 [2024-12-05 20:28:20.631930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-12-05 20:28:20.631956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.419 [2024-12-05 20:28:20.632006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-12-05 20:28:20.632023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.419 [2024-12-05 20:28:20.632072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 
cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-12-05 20:28:20.632085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.419 [2024-12-05 20:28:20.632133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-12-05 20:28:20.632146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:27.419 #52 NEW cov: 12411 ft: 15620 corp: 28/801b lim: 45 exec/s: 52 rss: 75Mb L: 43/43 MS: 1 InsertRepeatedBytes- 00:08:27.419 [2024-12-05 20:28:20.671757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a250a cdw11:0a250000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-12-05 20:28:20.671782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.419 [2024-12-05 20:28:20.671849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ad0aadad cdw11:ad0a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-12-05 20:28:20.671863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.419 #53 NEW cov: 12411 ft: 15627 corp: 29/825b lim: 45 exec/s: 53 rss: 75Mb L: 24/43 MS: 1 CrossOver- 00:08:27.419 [2024-12-05 20:28:20.711971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-12-05 20:28:20.712000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.419 [2024-12-05 20:28:20.712070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-12-05 20:28:20.712086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.419 #54 NEW cov: 12411 ft: 15645 corp: 30/843b lim: 45 exec/s: 54 rss: 75Mb L: 18/43 MS: 1 EraseBytes- 00:08:27.419 [2024-12-05 20:28:20.752111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:adadadad cdw11:2d0a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-12-05 20:28:20.752137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.419 [2024-12-05 20:28:20.752189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.420 [2024-12-05 20:28:20.752203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.420 [2024-12-05 20:28:20.752254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.420 [2024-12-05 20:28:20.752284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
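Mutation tags carrying a DE: suffix, such as the PersAutoDict- entry "\002\000\000\000" in the previous run and the CMP- entry "\000\000\001\233" a few records below, mark inputs built from dictionary entries: CMP entries come from comparison tracing, and the persistent auto-dictionary is also what feeds the "Recommended dictionary" block printed at the end of each run. With stock libFuzzer such entries can be fed back through a dictionary file via -dict=; whether this SPDK wrapper forwards extra libFuzzer flags is not shown in this log, so the following is a generic libFuzzer sketch with a hypothetical file name, not the harness invocation.

    # Generic libFuzzer usage (nvmf.dict is a hypothetical name):
    printf '%s\n' '"\x02\x00\x00\x00"' '"\x00\x00\x01\x9b"' > nvmf.dict
    ./fuzz_target -dict=nvmf.dict corpus_dir/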
00:08:27.420 #55 NEW cov: 12411 ft: 15648 corp: 31/871b lim: 45 exec/s: 55 rss: 75Mb L: 28/43 MS: 1 CrossOver- 00:08:27.420 [2024-12-05 20:28:20.791971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:250a250a cdw11:ad0a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.420 [2024-12-05 20:28:20.791996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.420 #56 NEW cov: 12411 ft: 15658 corp: 32/880b lim: 45 exec/s: 56 rss: 75Mb L: 9/43 MS: 1 EraseBytes- 00:08:27.420 [2024-12-05 20:28:20.852462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.420 [2024-12-05 20:28:20.852487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.420 [2024-12-05 20:28:20.852540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00010a00 cdw11:9bad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.420 [2024-12-05 20:28:20.852554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.420 [2024-12-05 20:28:20.852603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ad43adad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.420 [2024-12-05 20:28:20.852616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.679 #57 NEW cov: 12411 ft: 15668 corp: 33/910b lim: 45 exec/s: 57 rss: 75Mb L: 30/43 MS: 1 CMP- DE: "\000\000\001\233"- 00:08:27.679 [2024-12-05 20:28:20.892526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-12-05 20:28:20.892551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.679 [2024-12-05 20:28:20.892603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-12-05 20:28:20.892617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.679 [2024-12-05 20:28:20.892665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:adadadad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-12-05 20:28:20.892679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.679 #58 NEW cov: 12411 ft: 15674 corp: 34/940b lim: 45 exec/s: 58 rss: 75Mb L: 30/43 MS: 1 ChangeByte- 00:08:27.679 [2024-12-05 20:28:20.932592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00002d00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-12-05 20:28:20.932617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.679 [2024-12-05 20:28:20.932670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 
nsid:0 cdw10:adad0aad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-12-05 20:28:20.932684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.679 [2024-12-05 20:28:20.932734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ad43adad cdw11:adad0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-12-05 20:28:20.932753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:27.679 #59 NEW cov: 12411 ft: 15681 corp: 35/970b lim: 45 exec/s: 59 rss: 75Mb L: 30/43 MS: 1 ChangeByte- 00:08:27.679 [2024-12-05 20:28:20.972566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000800 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-12-05 20:28:20.972591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:27.679 [2024-12-05 20:28:20.972655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:d1d1d1d1 cdw11:d10a0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-12-05 20:28:20.972669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:27.679 #60 NEW cov: 12411 ft: 15691 corp: 36/988b lim: 45 exec/s: 30 rss: 75Mb L: 18/43 MS: 1 CrossOver- 00:08:27.679 #60 DONE cov: 12411 ft: 15691 corp: 36/988b lim: 45 exec/s: 30 rss: 75Mb 00:08:27.679 ###### Recommended dictionary. ###### 00:08:27.679 "\000\000\001\233" # Uses: 0 00:08:27.680 ###### End of recommended dictionary. 
###### 00:08:27.680 Done 60 runs in 2 second(s) 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:27.939 20:28:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:08:27.939 [2024-12-05 20:28:21.181243] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:27.940 [2024-12-05 20:28:21.181320] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841952 ] 00:08:28.199 [2024-12-05 20:28:21.493848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.199 [2024-12-05 20:28:21.545292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.199 [2024-12-05 20:28:21.604450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.199 [2024-12-05 20:28:21.620699] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:08:28.199 INFO: Running with entropic power schedule (0xFF, 100). 
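The "Recommended dictionary" block printed at the end of the run above is standard libFuzzer output: byte sequences recovered from comparison (CMP) instrumentation that helped reach new coverage, with their use counts (see the 'MS: 1 CMP- DE: "\000\000\001\233"' mutation at testcase #57 above). As a hedged sketch, such a token could be persisted and replayed with libFuzzer's standard -dict= option; "\233" is an octal escape (byte 0x9b), the file, target, and corpus names below are placeholders, and whether SPDK's llvm_nvme_fuzz wrapper forwards raw libFuzzer options is not shown in this log:

  # Save the recommended token in libFuzzer/AFL dictionary syntax.
  # printf '%s' does not expand \x escapes, so the line is written verbatim.
  printf '%s\n' 'cmp_token="\x00\x00\x01\x9b"' > nvmf_5.dict
  # Replay on a later run (placeholder target and corpus paths).
  ./llvm_nvme_fuzz -dict=nvmf_5.dict ./corpus_dir

A similarly hedged one-liner can tally which mutation operators produced the "#N NEW" events in a saved copy of this console output (console.log is a placeholder name):

  # Split chained mutation names like ShuffleBytes-ChangeByte- and count each.
  grep -Eo 'MS: [0-9]+ [A-Za-z-]+' console.log | awk '{print $3}' \
    | tr '-' '\n' | grep -v '^$' | sort | uniq -c | sort -rn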
00:08:28.199 INFO: Seed: 1010450285 00:08:28.458 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:28.458 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:28.458 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:28.458 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.458 #2 INITED exec/s: 0 rss: 67Mb 00:08:28.458 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:28.458 This may also happen if the target rejected all inputs we tried so far 00:08:28.458 [2024-12-05 20:28:21.676537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:28.458 [2024-12-05 20:28:21.676571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.458 [2024-12-05 20:28:21.676639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.458 [2024-12-05 20:28:21.676654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.458 [2024-12-05 20:28:21.676704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.458 [2024-12-05 20:28:21.676718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.458 [2024-12-05 20:28:21.676773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.458 [2024-12-05 20:28:21.676786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:28.458 [2024-12-05 20:28:21.676837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.458 [2024-12-05 20:28:21.676850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:28.716 NEW_FUNC[1/715]: 0x446728 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:08:28.716 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:28.716 #6 NEW cov: 12101 ft: 12100 corp: 2/11b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 4 ShuffleBytes-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:08:28.716 [2024-12-05 20:28:22.016956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.716 [2024-12-05 20:28:22.016999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.716 #8 NEW cov: 12214 ft: 13065 corp: 3/14b lim: 10 exec/s: 0 rss: 74Mb L: 3/10 MS: 2 ChangeByte-CMP- DE: "\377\377"- 00:08:28.716 [2024-12-05 20:28:22.057343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:28.716 [2024-12-05 20:28:22.057371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.716 [2024-12-05 20:28:22.057420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.716 [2024-12-05 20:28:22.057435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.716 [2024-12-05 20:28:22.057484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000031ff cdw11:00000000 00:08:28.716 [2024-12-05 20:28:22.057498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.716 [2024-12-05 20:28:22.057546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.716 [2024-12-05 20:28:22.057559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:28.716 [2024-12-05 20:28:22.057607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.716 [2024-12-05 20:28:22.057621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:28.716 #9 NEW cov: 12220 ft: 13255 corp: 4/24b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:08:28.716 [2024-12-05 20:28:22.117504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:28.717 [2024-12-05 20:28:22.117530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.717 [2024-12-05 20:28:22.117578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.717 [2024-12-05 20:28:22.117592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.717 [2024-12-05 20:28:22.117641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.717 [2024-12-05 20:28:22.117655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.717 [2024-12-05 20:28:22.117701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.717 [2024-12-05 20:28:22.117714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:28.717 [2024-12-05 20:28:22.117766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000000ff cdw11:00000000 00:08:28.717 [2024-12-05 20:28:22.117779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:28.717 #10 NEW cov: 12305 ft: 13613 corp: 5/34b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:08:28.975 [2024-12-05 20:28:22.157288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.157315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.157364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.157378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.975 #11 NEW cov: 12305 ft: 13901 corp: 6/39b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 EraseBytes- 00:08:28.975 [2024-12-05 20:28:22.197596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.197622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.197672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.197686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.197736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000031ff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.197755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.197804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.197817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:28.975 #12 NEW cov: 12305 ft: 13973 corp: 7/47b lim: 10 exec/s: 0 rss: 74Mb L: 8/10 MS: 1 EraseBytes- 00:08:28.975 [2024-12-05 20:28:22.257874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.257900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.257949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.257966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.258015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.258028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.258076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.258089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.258137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.258150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:08:28.975 #13 NEW cov: 12305 ft: 14028 corp: 8/57b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 PersAutoDict- DE: "\377\377"- 00:08:28.975 [2024-12-05 20:28:22.318047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.318072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.318124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.318138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.975 [2024-12-05 20:28:22.318186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.975 [2024-12-05 20:28:22.318216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:28.976 [2024-12-05 20:28:22.318267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.976 [2024-12-05 20:28:22.318280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:28.976 [2024-12-05 20:28:22.318326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.976 [2024-12-05 20:28:22.318339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:28.976 #14 NEW cov: 12305 ft: 14059 corp: 9/67b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:08:28.976 [2024-12-05 20:28:22.357834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031f7 cdw11:00000000 00:08:28.976 [2024-12-05 20:28:22.357858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.976 [2024-12-05 20:28:22.357907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:28.976 [2024-12-05 20:28:22.357921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.976 #15 NEW cov: 12305 ft: 14191 corp: 10/72b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 ChangeBit- 00:08:29.252 [2024-12-05 20:28:22.418008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff31 cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.418033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.418083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.418100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.252 #16 NEW cov: 12305 ft: 14249 corp: 11/77b lim: 10 exec/s: 0 rss: 74Mb L: 5/10 MS: 1 ShuffleBytes- 00:08:29.252 [2024-12-05 20:28:22.458307] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.458331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.458395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.458409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.458459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000030ff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.458472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.458520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.458533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.252 #17 NEW cov: 12305 ft: 14277 corp: 12/85b lim: 10 exec/s: 0 rss: 75Mb L: 8/10 MS: 1 ChangeASCIIInt- 00:08:29.252 [2024-12-05 20:28:22.518620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.518646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.518695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.518709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.518765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.518779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.518827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.518840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.518888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.518901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:29.252 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:29.252 #18 NEW cov: 12328 ft: 14311 corp: 13/95b lim: 10 exec/s: 0 rss: 75Mb L: 10/10 MS: 1 ShuffleBytes- 00:08:29.252 [2024-12-05 20:28:22.578671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.578696] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.578768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.578783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.578832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a530 cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.578849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.578899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.578913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.252 #19 NEW cov: 12328 ft: 14356 corp: 14/104b lim: 10 exec/s: 0 rss: 75Mb L: 9/10 MS: 1 InsertByte- 00:08:29.252 [2024-12-05 20:28:22.638853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.638878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.638928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.638942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.638991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a530 cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.639004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.252 [2024-12-05 20:28:22.639052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.252 [2024-12-05 20:28:22.639065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.252 #20 NEW cov: 12328 ft: 14373 corp: 15/113b lim: 10 exec/s: 20 rss: 75Mb L: 9/10 MS: 1 ShuffleBytes- 00:08:29.511 [2024-12-05 20:28:22.699140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.511 [2024-12-05 20:28:22.699166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.511 [2024-12-05 20:28:22.699217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.511 [2024-12-05 20:28:22.699230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.511 [2024-12-05 20:28:22.699278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.511 [2024-12-05 20:28:22.699291] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.511 [2024-12-05 20:28:22.699338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.511 [2024-12-05 20:28:22.699351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.511 [2024-12-05 20:28:22.699397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.511 [2024-12-05 20:28:22.699410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:29.512 #21 NEW cov: 12328 ft: 14401 corp: 16/123b lim: 10 exec/s: 21 rss: 75Mb L: 10/10 MS: 1 CopyPart- 00:08:29.512 [2024-12-05 20:28:22.739114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.739140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.739188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.739201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.739251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000030ff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.739265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.739314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.739327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.512 #22 NEW cov: 12328 ft: 14410 corp: 17/131b lim: 10 exec/s: 22 rss: 75Mb L: 8/10 MS: 1 CopyPart- 00:08:29.512 [2024-12-05 20:28:22.779327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.779351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.779416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.779430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.779480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.779493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.779540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.779553] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.779602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000000ff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.779615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:29.512 #23 NEW cov: 12328 ft: 14467 corp: 18/141b lim: 10 exec/s: 23 rss: 75Mb L: 10/10 MS: 1 CopyPart- 00:08:29.512 [2024-12-05 20:28:22.819421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031fb cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.819445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.819510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.819524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.819573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.819586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.819636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.819649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.819697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000000ff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.819711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:29.512 #24 NEW cov: 12328 ft: 14510 corp: 19/151b lim: 10 exec/s: 24 rss: 75Mb L: 10/10 MS: 1 ChangeBit- 00:08:29.512 [2024-12-05 20:28:22.879561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.879590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.879654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.879668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.879718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.879732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.879785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.879799] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.512 #25 NEW cov: 12328 ft: 14524 corp: 20/160b lim: 10 exec/s: 25 rss: 75Mb L: 9/10 MS: 1 EraseBytes- 00:08:29.512 [2024-12-05 20:28:22.919582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.919607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.919673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff73 cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.919687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.512 [2024-12-05 20:28:22.919737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.512 [2024-12-05 20:28:22.919755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.513 [2024-12-05 20:28:22.919804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.513 [2024-12-05 20:28:22.919817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.771 #26 NEW cov: 12328 ft: 14554 corp: 21/169b lim: 10 exec/s: 26 rss: 75Mb L: 9/10 MS: 1 ChangeByte- 00:08:29.771 [2024-12-05 20:28:22.979794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffa5 cdw11:00000000 00:08:29.771 [2024-12-05 20:28:22.979820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.771 [2024-12-05 20:28:22.979870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff31 cdw11:00000000 00:08:29.771 [2024-12-05 20:28:22.979884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.771 [2024-12-05 20:28:22.979932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff30 cdw11:00000000 00:08:29.771 [2024-12-05 20:28:22.979945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.771 [2024-12-05 20:28:22.979993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.771 [2024-12-05 20:28:22.980006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.771 #27 NEW cov: 12328 ft: 14570 corp: 22/178b lim: 10 exec/s: 27 rss: 75Mb L: 9/10 MS: 1 ShuffleBytes- 00:08:29.771 [2024-12-05 20:28:23.040081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.771 [2024-12-05 20:28:23.040110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.771 [2024-12-05 20:28:23.040176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 
cdw10:0000feff cdw11:00000000 00:08:29.771 [2024-12-05 20:28:23.040190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.040239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.040253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.040300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.040313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.040363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.040376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:29.772 #28 NEW cov: 12328 ft: 14577 corp: 23/188b lim: 10 exec/s: 28 rss: 75Mb L: 10/10 MS: 1 ChangeBit- 00:08:29.772 [2024-12-05 20:28:23.100214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.100238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.100303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.100317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.100366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.100380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.100428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.100442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.100489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.100503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:29.772 #29 NEW cov: 12328 ft: 14584 corp: 24/198b lim: 10 exec/s: 29 rss: 75Mb L: 10/10 MS: 1 ShuffleBytes- 00:08:29.772 [2024-12-05 20:28:23.140065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031f7 cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.140089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.140157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 
cdw10:0000ffff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.140171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.140221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.140234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.772 #30 NEW cov: 12328 ft: 14722 corp: 25/205b lim: 10 exec/s: 30 rss: 75Mb L: 7/10 MS: 1 CopyPart- 00:08:29.772 [2024-12-05 20:28:23.200436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.200461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.200513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.200526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.200573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.200586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.200636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.200649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:29.772 [2024-12-05 20:28:23.200696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000000ff cdw11:00000000 00:08:29.772 [2024-12-05 20:28:23.200709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:30.032 #31 NEW cov: 12328 ft: 14752 corp: 26/215b lim: 10 exec/s: 31 rss: 75Mb L: 10/10 MS: 1 ShuffleBytes- 00:08:30.032 [2024-12-05 20:28:23.240286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.240311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.240378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.240392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.240443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.240456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.032 #32 NEW cov: 12328 ft: 14804 corp: 27/222b lim: 10 exec/s: 32 rss: 76Mb L: 7/10 MS: 1 EraseBytes- 00:08:30.032 [2024-12-05 
20:28:23.300726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.300756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.300834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.300847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.300896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001500 cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.300910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.300959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.300972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.301020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.301036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:30.032 #33 NEW cov: 12328 ft: 14817 corp: 28/232b lim: 10 exec/s: 33 rss: 76Mb L: 10/10 MS: 1 CMP- DE: "\025\000"- 00:08:30.032 [2024-12-05 20:28:23.360543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff31 cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.360570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.360620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff21 cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.360634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.032 #34 NEW cov: 12328 ft: 14892 corp: 29/237b lim: 10 exec/s: 34 rss: 76Mb L: 5/10 MS: 1 ChangeByte- 00:08:30.032 [2024-12-05 20:28:23.420814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.420841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.420908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.420922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.420971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ebff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.420985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.032 #35 
NEW cov: 12328 ft: 14920 corp: 30/243b lim: 10 exec/s: 35 rss: 76Mb L: 6/10 MS: 1 InsertByte- 00:08:30.032 [2024-12-05 20:28:23.460790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.460827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.032 [2024-12-05 20:28:23.460892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:30.032 [2024-12-05 20:28:23.460906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.291 #36 NEW cov: 12328 ft: 14931 corp: 31/248b lim: 10 exec/s: 36 rss: 76Mb L: 5/10 MS: 1 EraseBytes- 00:08:30.291 [2024-12-05 20:28:23.501159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.501184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.501235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fffa cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.501249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.501299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000031ff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.501312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.501363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.501376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.291 #37 NEW cov: 12328 ft: 14935 corp: 32/256b lim: 10 exec/s: 37 rss: 76Mb L: 8/10 MS: 1 ChangeBinInt- 00:08:30.291 [2024-12-05 20:28:23.541370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.541397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.541446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.541460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.541508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.541522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.541571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000032ff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.541584] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.541631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.541644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:30.291 #38 NEW cov: 12328 ft: 14965 corp: 33/266b lim: 10 exec/s: 38 rss: 76Mb L: 10/10 MS: 1 ChangeByte- 00:08:30.291 [2024-12-05 20:28:23.581321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.581347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.581415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff30 cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.581429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.581480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.581493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.581542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.581556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.291 #39 NEW cov: 12328 ft: 14984 corp: 34/274b lim: 10 exec/s: 39 rss: 76Mb L: 8/10 MS: 1 ShuffleBytes- 00:08:30.291 [2024-12-05 20:28:23.621560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000031ff cdw11:00000000 00:08:30.291 [2024-12-05 20:28:23.621585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.291 [2024-12-05 20:28:23.621635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.292 [2024-12-05 20:28:23.621648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.292 [2024-12-05 20:28:23.621696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:30.292 [2024-12-05 20:28:23.621709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.292 [2024-12-05 20:28:23.621758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000032ff cdw11:00000000 00:08:30.292 [2024-12-05 20:28:23.621788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:30.292 [2024-12-05 20:28:23.621840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000feff cdw11:00000000 00:08:30.292 [2024-12-05 20:28:23.621854] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:30.292 #40 NEW cov: 12328 ft: 15009 corp: 35/284b lim: 10 exec/s: 20 rss: 76Mb L: 10/10 MS: 1 ChangeBit- 00:08:30.292 #40 DONE cov: 12328 ft: 15009 corp: 35/284b lim: 10 exec/s: 20 rss: 76Mb 00:08:30.292 ###### Recommended dictionary. ###### 00:08:30.292 "\377\377" # Uses: 1 00:08:30.292 "\025\000" # Uses: 0 00:08:30.292 ###### End of recommended dictionary. ###### 00:08:30.292 Done 40 runs in 2 second(s) 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:30.551 20:28:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:08:30.551 [2024-12-05 20:28:23.826896] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
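Note: the nvmf/run.sh trace above is effectively the recipe for standing up one fuzzer instance. Below is a minimal sketch of that setup, reconstructed only from the values visible in the trace; $rootdir and $output_dir stand in for the absolute workspace paths, and the output redirections on the sed and echo steps are assumptions, since the trace elides them.

  # Per-fuzzer setup as implied by the nvmf/run.sh trace (fuzzer_type=7 in this run)
  fuzzer_type=7
  timen=1      # -t: time budget passed to the fuzzer
  core=0x1     # -m: SPDK reactor core mask
  corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
  nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  # Each fuzzer gets its own NVMe/TCP port: "44" plus the zero-padded index (4407, 4408, ...)
  port=44$(printf %02d $fuzzer_type)
  mkdir -p $corpus_dir
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # Rewrite the template target config to listen on the per-fuzzer port (redirection assumed)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg
  # Known-benign allocations suppressed for LeakSanitizer (redirection assumed)
  echo leak:spdk_nvmf_qpair_disconnect >> $suppress_file
  echo leak:nvmf_ctrlr_create >> $suppress_file
  LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
    $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
    -P $output_dir/llvm/ -F "$trid" -c $nvmf_cfg -t $timen -D $corpus_dir -Z $fuzzer_type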
00:08:30.551 [2024-12-05 20:28:23.826971] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842327 ] 00:08:30.809 [2024-12-05 20:28:24.133821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.809 [2024-12-05 20:28:24.192145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.068 [2024-12-05 20:28:24.251320] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.068 [2024-12-05 20:28:24.267559] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:08:31.068 INFO: Running with entropic power schedule (0xFF, 100). 00:08:31.068 INFO: Seed: 3656449972 00:08:31.068 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:31.068 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:31.068 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:31.068 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.068 #2 INITED exec/s: 0 rss: 67Mb 00:08:31.068 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:31.068 This may also happen if the target rejected all inputs we tried so far 00:08:31.068 [2024-12-05 20:28:24.323013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:31.068 [2024-12-05 20:28:24.323046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.326 NEW_FUNC[1/714]: 0x447128 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:08:31.326 NEW_FUNC[2/714]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:31.326 #3 NEW cov: 12095 ft: 12092 corp: 2/3b lim: 10 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 CrossOver- 00:08:31.326 [2024-12-05 20:28:24.665753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:31.326 [2024-12-05 20:28:24.665806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.326 [2024-12-05 20:28:24.665897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:31.326 [2024-12-05 20:28:24.665918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.326 [2024-12-05 20:28:24.666013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:31.326 [2024-12-05 20:28:24.666034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.326 NEW_FUNC[1/1]: 0x195a6b8 in _nvme_qpair_complete_abort_queued_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:593 00:08:31.326 #4 NEW cov: 12214 ft: 12884 corp: 3/10b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:08:31.326 [2024-12-05 
20:28:24.745550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a46 cdw11:00000000 00:08:31.326 [2024-12-05 20:28:24.745583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.585 #5 NEW cov: 12220 ft: 13063 corp: 4/12b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 1 ChangeByte- 00:08:31.585 [2024-12-05 20:28:24.795866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:31.585 [2024-12-05 20:28:24.795897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.585 #6 NEW cov: 12305 ft: 13312 corp: 5/14b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 1 CrossOver- 00:08:31.585 [2024-12-05 20:28:24.846951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:31.585 [2024-12-05 20:28:24.846981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.585 [2024-12-05 20:28:24.847080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:31.585 [2024-12-05 20:28:24.847097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.585 [2024-12-05 20:28:24.847186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:31.585 [2024-12-05 20:28:24.847202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.585 #7 NEW cov: 12305 ft: 13387 corp: 6/21b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 CrossOver- 00:08:31.585 [2024-12-05 20:28:24.916899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a09 cdw11:00000000 00:08:31.585 [2024-12-05 20:28:24.916929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.585 #11 NEW cov: 12305 ft: 13465 corp: 7/23b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 4 ChangeBit-ShuffleBytes-ChangeBit-CrossOver- 00:08:31.585 [2024-12-05 20:28:24.967838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:31.585 [2024-12-05 20:28:24.967867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.585 [2024-12-05 20:28:24.967953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:31.585 [2024-12-05 20:28:24.967971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.585 [2024-12-05 20:28:24.968062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:31.585 [2024-12-05 20:28:24.968078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.585 [2024-12-05 20:28:24.968161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:7 nsid:0 cdw10:0000400a cdw11:00000000 00:08:31.585 [2024-12-05 20:28:24.968179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:31.585 #12 NEW cov: 12305 ft: 13850 corp: 8/31b lim: 10 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 InsertByte- 00:08:31.857 [2024-12-05 20:28:25.038258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:31.857 [2024-12-05 20:28:25.038288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.857 [2024-12-05 20:28:25.038379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:31.857 [2024-12-05 20:28:25.038395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.857 [2024-12-05 20:28:25.038480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:31.857 [2024-12-05 20:28:25.038495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.857 [2024-12-05 20:28:25.038588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000400a cdw11:00000000 00:08:31.857 [2024-12-05 20:28:25.038606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:31.857 #13 NEW cov: 12305 ft: 13903 corp: 9/39b lim: 10 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:31.857 [2024-12-05 20:28:25.107870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a09 cdw11:00000000 00:08:31.857 [2024-12-05 20:28:25.107897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.857 #14 NEW cov: 12305 ft: 13958 corp: 10/42b lim: 10 exec/s: 0 rss: 74Mb L: 3/8 MS: 1 CrossOver- 00:08:31.857 [2024-12-05 20:28:25.178089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a4a cdw11:00000000 00:08:31.857 [2024-12-05 20:28:25.178116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.857 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:31.857 #15 NEW cov: 12328 ft: 14021 corp: 11/44b lim: 10 exec/s: 0 rss: 74Mb L: 2/8 MS: 1 ChangeBit- 00:08:31.857 [2024-12-05 20:28:25.248449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a46 cdw11:00000000 00:08:31.857 [2024-12-05 20:28:25.248479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.857 #16 NEW cov: 12328 ft: 14081 corp: 12/46b lim: 10 exec/s: 0 rss: 74Mb L: 2/8 MS: 1 CopyPart- 00:08:32.119 [2024-12-05 20:28:25.319468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.319496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.119 
[2024-12-05 20:28:25.319582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004000 cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.319598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.119 [2024-12-05 20:28:25.319683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.319699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.119 [2024-12-05 20:28:25.319795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000400a cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.319811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.119 #22 NEW cov: 12328 ft: 14135 corp: 13/54b lim: 10 exec/s: 22 rss: 74Mb L: 8/8 MS: 1 CopyPart- 00:08:32.119 [2024-12-05 20:28:25.390236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.390264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.119 [2024-12-05 20:28:25.390355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.390373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.119 [2024-12-05 20:28:25.390459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.390475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.119 [2024-12-05 20:28:25.390557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.390572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.119 [2024-12-05 20:28:25.390661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a46 cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.390678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:32.119 #23 NEW cov: 12328 ft: 14210 corp: 14/64b lim: 10 exec/s: 23 rss: 75Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:08:32.119 [2024-12-05 20:28:25.459485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.459512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.119 #24 NEW cov: 12328 ft: 14221 corp: 15/67b lim: 10 exec/s: 24 rss: 75Mb L: 3/10 MS: 1 InsertByte- 00:08:32.119 [2024-12-05 20:28:25.509724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b0a cdw11:00000000 00:08:32.119 [2024-12-05 20:28:25.509756] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.119 #25 NEW cov: 12328 ft: 14253 corp: 16/70b lim: 10 exec/s: 25 rss: 75Mb L: 3/10 MS: 1 InsertByte- 00:08:32.378 [2024-12-05 20:28:25.560270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.560297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.378 [2024-12-05 20:28:25.560390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.560406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.378 [2024-12-05 20:28:25.560494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.560511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.378 [2024-12-05 20:28:25.560593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.560610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.378 #26 NEW cov: 12328 ft: 14273 corp: 17/78b lim: 10 exec/s: 26 rss: 75Mb L: 8/10 MS: 1 CrossOver- 00:08:32.378 [2024-12-05 20:28:25.610657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3d cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.610684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.378 [2024-12-05 20:28:25.610778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.610795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.378 #27 NEW cov: 12328 ft: 14436 corp: 18/82b lim: 10 exec/s: 27 rss: 75Mb L: 4/10 MS: 1 CrossOver- 00:08:32.378 [2024-12-05 20:28:25.680652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b0a cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.680678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.378 #28 NEW cov: 12328 ft: 14456 corp: 19/85b lim: 10 exec/s: 28 rss: 75Mb L: 3/10 MS: 1 CopyPart- 00:08:32.378 [2024-12-05 20:28:25.752131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.752159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.378 [2024-12-05 20:28:25.752246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002500 cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.752263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.378 
[2024-12-05 20:28:25.752352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.752368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.378 [2024-12-05 20:28:25.752455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000400a cdw11:00000000 00:08:32.378 [2024-12-05 20:28:25.752470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.637 [2024-12-05 20:28:25.822399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.637 [2024-12-05 20:28:25.822426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.637 [2024-12-05 20:28:25.822512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002540 cdw11:00000000 00:08:32.637 [2024-12-05 20:28:25.822533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.637 [2024-12-05 20:28:25.822612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.637 [2024-12-05 20:28:25.822627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.638 [2024-12-05 20:28:25.822712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000400a cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.822727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.638 #30 NEW cov: 12328 ft: 14474 corp: 20/93b lim: 10 exec/s: 30 rss: 75Mb L: 8/10 MS: 2 ChangeByte-ChangeBit- 00:08:32.638 [2024-12-05 20:28:25.871717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00009d3b cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.871748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.638 #35 NEW cov: 12328 ft: 14494 corp: 21/95b lim: 10 exec/s: 35 rss: 75Mb L: 2/10 MS: 5 ShuffleBytes-ChangeByte-CopyPart-CopyPart-InsertByte- 00:08:32.638 [2024-12-05 20:28:25.922939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.922964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.638 [2024-12-05 20:28:25.923047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.923064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.638 [2024-12-05 20:28:25.923149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.923164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.638 [2024-12-05 20:28:25.923253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.923269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.638 [2024-12-05 20:28:25.923352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a06 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.923368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:32.638 #36 NEW cov: 12328 ft: 14507 corp: 22/105b lim: 10 exec/s: 36 rss: 75Mb L: 10/10 MS: 1 ChangeBit- 00:08:32.638 [2024-12-05 20:28:25.993343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.993370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.638 [2024-12-05 20:28:25.993459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.993476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.638 [2024-12-05 20:28:25.993560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002525 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.993575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.638 [2024-12-05 20:28:25.993659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002e25 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.993678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.638 [2024-12-05 20:28:25.993761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000a46 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:25.993793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:32.638 #37 NEW cov: 12328 ft: 14513 corp: 23/115b lim: 10 exec/s: 37 rss: 75Mb L: 10/10 MS: 1 ChangeByte- 00:08:32.638 [2024-12-05 20:28:26.042565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a09 cdw11:00000000 00:08:32.638 [2024-12-05 20:28:26.042595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.897 #38 NEW cov: 12328 ft: 14629 corp: 24/118b lim: 10 exec/s: 38 rss: 75Mb L: 3/10 MS: 1 ShuffleBytes- 00:08:32.897 [2024-12-05 20:28:26.113782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:32.897 [2024-12-05 20:28:26.113810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.897 [2024-12-05 20:28:26.113907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.897 
[2024-12-05 20:28:26.113924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.897 [2024-12-05 20:28:26.114012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005b00 cdw11:00000000 00:08:32.897 [2024-12-05 20:28:26.114029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.897 [2024-12-05 20:28:26.114115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:08:32.897 [2024-12-05 20:28:26.114133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.897 #39 NEW cov: 12328 ft: 14645 corp: 25/127b lim: 10 exec/s: 39 rss: 75Mb L: 9/10 MS: 1 InsertByte- 00:08:32.897 [2024-12-05 20:28:26.183373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:32.897 [2024-12-05 20:28:26.183402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.897 #40 NEW cov: 12328 ft: 14668 corp: 26/130b lim: 10 exec/s: 40 rss: 75Mb L: 3/10 MS: 1 InsertByte- 00:08:32.897 [2024-12-05 20:28:26.234105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001a00 cdw11:00000000 00:08:32.897 [2024-12-05 20:28:26.234135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.897 [2024-12-05 20:28:26.234226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.897 [2024-12-05 20:28:26.234243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.897 [2024-12-05 20:28:26.234333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:32.897 [2024-12-05 20:28:26.234349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.897 #41 NEW cov: 12328 ft: 14747 corp: 27/137b lim: 10 exec/s: 41 rss: 75Mb L: 7/10 MS: 1 ChangeBit- 00:08:32.897 [2024-12-05 20:28:26.283967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:32.897 [2024-12-05 20:28:26.283996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.897 #42 NEW cov: 12328 ft: 14751 corp: 28/140b lim: 10 exec/s: 21 rss: 75Mb L: 3/10 MS: 1 CopyPart- 00:08:32.897 #42 DONE cov: 12328 ft: 14751 corp: 28/140b lim: 10 exec/s: 21 rss: 75Mb 00:08:32.897 Done 42 runs in 2 second(s) 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:08:33.156 20:28:26 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:33.156 20:28:26 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:08:33.156 [2024-12-05 20:28:26.461176] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:33.156 [2024-12-05 20:28:26.461250] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842697 ] 00:08:33.414 [2024-12-05 20:28:26.668271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.414 [2024-12-05 20:28:26.709104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.414 [2024-12-05 20:28:26.768644] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.414 [2024-12-05 20:28:26.784887] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:08:33.414 INFO: Running with entropic power schedule (0xFF, 100). 
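Note: the status lines in this output follow libFuzzer's standard format. Decoding one line from the run below as a worked example (field meanings per upstream libFuzzer conventions; this breakdown is an annotation for readability, not harness output):

  #9 NEW cov: 12332 ft: 14538 corp: 8/19b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CopyPart-
  #9       total inputs executed so far
  NEW      this input added new coverage and was kept in the corpus
  cov:     total coverage points (edges/blocks) observed so far
  ft:      total "features" (finer-grained coverage signals, e.g. edge hit-count buckets)
  corp:    corpus now holds 8 inputs totaling 19 bytes
  lim:     current input-length cap in bytes
  exec/s:  executions per second (printed as 0 until a full second has elapsed)
  rss:     resident memory of the fuzzer process
  L:       size of this input / largest input in the corpus
  MS:      mutation sequence that produced it (count, then mutator names)

The closing "#N DONE" / "Done N runs in T second(s)" lines are printed when the run's time budget expires, and "INITED" marks the end of corpus initialization.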
00:08:33.414 INFO: Seed: 1879466229 00:08:33.414 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:33.414 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:33.414 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:33.414 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.414 [2024-12-05 20:28:26.840362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.414 [2024-12-05 20:28:26.840394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.672 #2 INITED cov: 12128 ft: 12126 corp: 1/1b exec/s: 0 rss: 73Mb 00:08:33.672 [2024-12-05 20:28:26.880349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.672 [2024-12-05 20:28:26.880375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.672 #3 NEW cov: 12241 ft: 12663 corp: 2/2b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 ChangeBit- 00:08:33.673 [2024-12-05 20:28:26.940682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:26.940707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.673 [2024-12-05 20:28:26.940780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:26.940796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.673 #4 NEW cov: 12247 ft: 13607 corp: 3/4b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:08:33.673 [2024-12-05 20:28:26.980764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:26.980789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.673 [2024-12-05 20:28:26.980847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:26.980860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.673 #5 NEW cov: 12332 ft: 13845 corp: 4/6b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:08:33.673 [2024-12-05 20:28:27.041121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:27.041145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.673 [2024-12-05 20:28:27.041219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:27.041234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.673 [2024-12-05 20:28:27.041287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:27.041302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.673 #6 NEW cov: 12332 ft: 14112 corp: 5/9b lim: 5 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 InsertByte- 00:08:33.673 [2024-12-05 20:28:27.101426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:27.101452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.673 [2024-12-05 20:28:27.101511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:27.101526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.673 [2024-12-05 20:28:27.101584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:27.101602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.673 [2024-12-05 20:28:27.101659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.673 [2024-12-05 20:28:27.101673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.931 #7 NEW cov: 12332 ft: 14418 corp: 6/13b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:08:33.931 [2024-12-05 20:28:27.141215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.141241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.931 [2024-12-05 20:28:27.141312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.141327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.931 #8 NEW cov: 12332 ft: 14480 corp: 7/15b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 CopyPart- 00:08:33.931 [2024-12-05 20:28:27.181596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.181621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.931 [2024-12-05 20:28:27.181676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.181690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.931 [2024-12-05 20:28:27.181748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.181762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.931 [2024-12-05 20:28:27.181834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.181848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.931 #9 NEW cov: 12332 ft: 14538 corp: 8/19b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:08:33.931 [2024-12-05 20:28:27.221425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.221450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.931 [2024-12-05 20:28:27.221505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.221519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.931 #10 NEW cov: 12332 ft: 14587 corp: 9/21b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ChangeBit- 00:08:33.931 [2024-12-05 20:28:27.281611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.281637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.931 [2024-12-05 20:28:27.281713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.281727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.931 #11 NEW cov: 12332 ft: 14670 corp: 10/23b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 InsertByte- 00:08:33.931 [2024-12-05 20:28:27.321716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.321741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.931 [2024-12-05 20:28:27.321820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.931 [2024-12-05 20:28:27.321834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.931 #12 NEW cov: 12332 ft: 14682 corp: 11/25b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ChangeBit- 00:08:33.932 [2024-12-05 20:28:27.361654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:33.932 [2024-12-05 20:28:27.361679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.190 #13 NEW cov: 12332 ft: 14712 corp: 12/26b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ChangeByte- 00:08:34.190 [2024-12-05 20:28:27.401959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.401983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.190 [2024-12-05 20:28:27.402057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.402071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.190 #14 NEW cov: 12332 ft: 14728 corp: 13/28b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ChangeBit- 00:08:34.190 [2024-12-05 20:28:27.442378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.442404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.190 [2024-12-05 20:28:27.442461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.442475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.190 [2024-12-05 20:28:27.442532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.442546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.190 [2024-12-05 20:28:27.442602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.442615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.190 #15 NEW cov: 12332 ft: 14782 corp: 14/32b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:08:34.190 [2024-12-05 20:28:27.482506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 
20:28:27.482535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.190 [2024-12-05 20:28:27.482591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.482605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.190 [2024-12-05 20:28:27.482661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.482675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:34.190 [2024-12-05 20:28:27.482731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.482749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:34.190 #16 NEW cov: 12332 ft: 14819 corp: 15/36b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:08:34.190 [2024-12-05 20:28:27.542353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.542378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.190 [2024-12-05 20:28:27.542435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.542448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.190 #17 NEW cov: 12332 ft: 14838 corp: 16/38b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:34.190 [2024-12-05 20:28:27.582269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.190 [2024-12-05 20:28:27.582294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.190 #18 NEW cov: 12332 ft: 14856 corp: 17/39b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 EraseBytes- 00:08:34.191 [2024-12-05 20:28:27.622599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.191 [2024-12-05 20:28:27.622625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.191 [2024-12-05 20:28:27.622683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:34.191 [2024-12-05 20:28:27.622697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.449 #19 NEW cov: 12332 ft: 14937 corp: 18/41b lim: 5 exec/s: 
0 rss: 73Mb L: 2/4 MS: 1 ChangeBit-
00:08:34.449 [2024-12-05 20:28:27.662959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.449 [2024-12-05 20:28:27.662985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:34.449 [2024-12-05 20:28:27.663043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.449 [2024-12-05 20:28:27.663057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:34.449 [2024-12-05 20:28:27.663117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.449 [2024-12-05 20:28:27.663130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:34.449 [2024-12-05 20:28:27.663187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.449 [2024-12-05 20:28:27.663200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:34.449 #20 NEW cov: 12332 ft: 14964 corp: 19/45b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeByte-
00:08:34.449 [2024-12-05 20:28:27.722828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.449 [2024-12-05 20:28:27.722853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:34.449 [2024-12-05 20:28:27.722938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.449 [2024-12-05 20:28:27.722952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:34.708 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:08:34.708 #21 NEW cov: 12355 ft: 15060 corp: 20/47b lim: 5 exec/s: 21 rss: 74Mb L: 2/2 MS: 1 ShuffleBytes-
00:08:34.708 [2024-12-05 20:28:28.043952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.043988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:34.708 [2024-12-05 20:28:28.044043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.044056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:34.708 [2024-12-05 20:28:28.044112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.044126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:34.708 [2024-12-05 20:28:28.044179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.044192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:34.708 #22 NEW cov: 12355 ft: 15137 corp: 21/51b lim: 5 exec/s: 22 rss: 75Mb L: 4/4 MS: 1 ChangeBit-
00:08:34.708 [2024-12-05 20:28:28.104038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.104064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:34.708 [2024-12-05 20:28:28.104139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.104153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:34.708 [2024-12-05 20:28:28.104209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.104226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:34.708 [2024-12-05 20:28:28.104283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.104297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:34.708 #23 NEW cov: 12355 ft: 15148 corp: 22/55b lim: 5 exec/s: 23 rss: 75Mb L: 4/4 MS: 1 ChangeBit-
00:08:34.708 [2024-12-05 20:28:28.144213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.144238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:34.708 [2024-12-05 20:28:28.144293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.144308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:34.708 [2024-12-05 20:28:28.144363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.144377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:34.708 [2024-12-05 20:28:28.144432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.708 [2024-12-05 20:28:28.144445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:34.967 #24 NEW cov: 12355 ft: 15158 corp: 23/59b lim: 5 exec/s: 24 rss: 75Mb L: 4/4 MS: 1 CopyPart-
00:08:34.967 [2024-12-05 20:28:28.183956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.967 [2024-12-05 20:28:28.183983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:34.967 [2024-12-05 20:28:28.184038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.967 [2024-12-05 20:28:28.184051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:34.967 #25 NEW cov: 12355 ft: 15166 corp: 24/61b lim: 5 exec/s: 25 rss: 75Mb L: 2/4 MS: 1 CrossOver-
00:08:34.967 [2024-12-05 20:28:28.224222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.967 [2024-12-05 20:28:28.224247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:34.967 [2024-12-05 20:28:28.224319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.967 [2024-12-05 20:28:28.224334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:34.967 [2024-12-05 20:28:28.224388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.967 [2024-12-05 20:28:28.224402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:34.967 #26 NEW cov: 12355 ft: 15177 corp: 25/64b lim: 5 exec/s: 26 rss: 75Mb L: 3/4 MS: 1 CrossOver-
00:08:34.967 [2024-12-05 20:28:28.284488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.967 [2024-12-05 20:28:28.284513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:34.967 [2024-12-05 20:28:28.284586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.967 [2024-12-05 20:28:28.284601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:34.967 [2024-12-05 20:28:28.284657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.967 [2024-12-05 20:28:28.284670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:34.967 [2024-12-05 20:28:28.284726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.968 [2024-12-05 20:28:28.284739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:34.968 #27 NEW cov: 12355 ft: 15199 corp: 26/68b lim: 5 exec/s: 27 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes-
00:08:34.968 [2024-12-05 20:28:28.344365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.968 [2024-12-05 20:28:28.344389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:34.968 [2024-12-05 20:28:28.344462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:34.968 [2024-12-05 20:28:28.344476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:34.968 #28 NEW cov: 12355 ft: 15211 corp: 27/70b lim: 5 exec/s: 28 rss: 75Mb L: 2/4 MS: 1 ChangeBinInt-
00:08:35.226 [2024-12-05 20:28:28.404548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.404574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:35.226 [2024-12-05 20:28:28.404632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.404646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:35.226 #29 NEW cov: 12355 ft: 15217 corp: 28/72b lim: 5 exec/s: 29 rss: 75Mb L: 2/4 MS: 1 ChangeByte-
00:08:35.226 [2024-12-05 20:28:28.464706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.464731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:35.226 [2024-12-05 20:28:28.464808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.464823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:35.226 #30 NEW cov: 12355 ft: 15249 corp: 29/74b lim: 5 exec/s: 30 rss: 75Mb L: 2/4 MS: 1 ChangeBit-
00:08:35.226 [2024-12-05 20:28:28.525037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.525067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:35.226 [2024-12-05 20:28:28.525139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.525153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:35.226 [2024-12-05 20:28:28.525209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.525222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:35.226 #31 NEW cov: 12355 ft: 15264 corp: 30/77b lim: 5 exec/s: 31 rss: 75Mb L: 3/4 MS: 1 InsertByte-
00:08:35.226 [2024-12-05 20:28:28.585532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.585556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:35.226 [2024-12-05 20:28:28.585625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.585639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:35.226 [2024-12-05 20:28:28.585694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.585708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:35.226 [2024-12-05 20:28:28.585769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.585783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:35.226 [2024-12-05 20:28:28.585848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.585861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:08:35.226 #32 NEW cov: 12355 ft: 15339 corp: 31/82b lim: 5 exec/s: 32 rss: 75Mb L: 5/5 MS: 1 InsertByte-
00:08:35.226 [2024-12-05 20:28:28.625201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.625227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:35.226 [2024-12-05 20:28:28.625286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.226 [2024-12-05 20:28:28.625300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:35.226 #33 NEW cov: 12355 ft: 15372 corp: 32/84b lim: 5 exec/s: 33 rss: 75Mb L: 2/5 MS: 1 ChangeBit-
00:08:35.484 [2024-12-05 20:28:28.665580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.484 [2024-12-05 20:28:28.665606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:35.484 [2024-12-05 20:28:28.665668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.484 [2024-12-05 20:28:28.665682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:35.484 [2024-12-05 20:28:28.665737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.484 [2024-12-05 20:28:28.665757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:35.484 [2024-12-05 20:28:28.665812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.484 [2024-12-05 20:28:28.665825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:35.484 #34 NEW cov: 12355 ft: 15388 corp: 33/88b lim: 5 exec/s: 34 rss: 75Mb L: 4/5 MS: 1 ShuffleBytes-
00:08:35.484 [2024-12-05 20:28:28.725433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.484 [2024-12-05 20:28:28.725458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:35.484 [2024-12-05 20:28:28.725516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.485 [2024-12-05 20:28:28.725530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:35.485 #35 NEW cov: 12355 ft: 15392 corp: 34/90b lim: 5 exec/s: 35 rss: 75Mb L: 2/5 MS: 1 ChangeBit-
00:08:35.485 [2024-12-05 20:28:28.765511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.485 [2024-12-05 20:28:28.765535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:35.485 [2024-12-05 20:28:28.765608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.485 [2024-12-05 20:28:28.765623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:35.485 #36 NEW cov: 12355 ft: 15432 corp: 35/92b lim: 5 exec/s: 36 rss: 75Mb L: 2/5 MS: 1 ChangeBit-
00:08:35.485 [2024-12-05 20:28:28.826029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.485 [2024-12-05 20:28:28.826055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:35.485 [2024-12-05 20:28:28.826112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.485 [2024-12-05 20:28:28.826126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:35.485 [2024-12-05 20:28:28.826180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.485 [2024-12-05 20:28:28.826193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:35.485 [2024-12-05 20:28:28.826246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:35.485 [2024-12-05 20:28:28.826260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:35.485 #37 NEW cov: 12355 ft: 15438 corp: 36/96b lim: 5 exec/s: 18 rss: 75Mb L: 4/4 MS: 1 ChangeByte-
00:08:35.485 #37 DONE cov: 12355 ft: 15438 corp: 36/96b lim: 5 exec/s: 18 rss: 75Mb
00:08:35.485 Done 37 runs in 2 second(s)
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
00:08:35.743 20:28:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
20:28:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
20:28:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
20:28:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
[2024-12-05 20:28:29.032230] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization...
[2024-12-05 20:28:29.032304] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842974 ]
[2024-12-05 20:28:29.244657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-05 20:28:29.282614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-05 20:28:29.342061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-12-05 20:28:29.358297] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 ***
INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 157505044
INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad),
INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
INFO: A corpus is not provided, starting from an empty corpus
[2024-12-05 20:28:29.413770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-05 20:28:29.413801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
#2 INITED cov: 12100 ft: 12125 corp: 1/1b exec/s: 0 rss: 73Mb
[2024-12-05 20:28:29.453958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-05 20:28:29.453984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
[2024-12-05 20:28:29.454054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-05 20:28:29.454069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
NEW_FUNC[1/1]: 0x1faa948 in thread_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1080
#3 NEW cov: 12241 ft: 13377 corp: 2/3b lim: 5 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 InsertByte-
00:08:36.517 [2024-12-05 20:28:29.794786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.517 [2024-12-05 20:28:29.794826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:36.517 #4 NEW cov: 12247 ft: 13566 corp: 3/4b lim: 5 exec/s: 0 rss: 74Mb L: 1/2 MS: 1 ChangeByte-
00:08:36.517 [2024-12-05 20:28:29.834888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.517 [2024-12-05 20:28:29.834914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:36.517 [2024-12-05 20:28:29.834969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.517 [2024-12-05 20:28:29.834984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:36.517 #5 NEW cov: 12332 ft: 13927 corp: 4/6b lim: 5 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 CopyPart-
00:08:36.517 [2024-12-05 20:28:29.895008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.517 [2024-12-05 20:28:29.895036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:36.517 [2024-12-05 20:28:29.895092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.517 [2024-12-05 20:28:29.895105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:36.517 #6 NEW cov: 12332 ft: 13987 corp: 5/8b lim: 5 exec/s: 0 rss: 74Mb L: 2/2 MS: 1 ShuffleBytes-
00:08:36.775 [2024-12-05 20:28:29.955194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.775 [2024-12-05 20:28:29.955222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:36.775 [2024-12-05 20:28:29.955279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.775 [2024-12-05 20:28:29.955294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:36.775 #7 NEW cov: 12332 ft: 14067 corp: 6/10b lim: 5 exec/s: 0 rss: 75Mb L: 2/2 MS: 1 CopyPart-
00:08:36.775 [2024-12-05 20:28:30.015531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.775 [2024-12-05 20:28:30.015559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:36.775 [2024-12-05 20:28:30.015621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.775 [2024-12-05 20:28:30.015636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:36.776 [2024-12-05 20:28:30.015690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.776 [2024-12-05 20:28:30.015704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:36.776 #8 NEW cov: 12332 ft: 14435 corp: 7/13b lim: 5 exec/s: 0 rss: 75Mb L: 3/3 MS: 1 CrossOver-
00:08:36.776 [2024-12-05 20:28:30.055653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.776 [2024-12-05 20:28:30.055681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:36.776 [2024-12-05 20:28:30.055740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.776 [2024-12-05 20:28:30.055761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:36.776 [2024-12-05 20:28:30.055817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.776 [2024-12-05 20:28:30.055831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:36.776 #9 NEW cov: 12332 ft: 14503 corp: 8/16b lim: 5 exec/s: 0 rss: 75Mb L: 3/3 MS: 1 CrossOver-
00:08:36.776 [2024-12-05 20:28:30.095557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.776 [2024-12-05 20:28:30.095583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:36.776 [2024-12-05 20:28:30.095641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.776 [2024-12-05 20:28:30.095655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:36.776 #10 NEW cov: 12332 ft: 14528 corp: 9/18b lim: 5 exec/s: 0 rss: 75Mb L: 2/3 MS: 1 EraseBytes-
00:08:36.776 [2024-12-05 20:28:30.155733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.776 [2024-12-05 20:28:30.155762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:36.776 [2024-12-05 20:28:30.155834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:36.776 [2024-12-05 20:28:30.155849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:36.776 #11 NEW cov: 12332 ft: 14589 corp: 10/20b lim: 5 exec/s: 0 rss: 75Mb L: 2/3 MS: 1 EraseBytes-
00:08:37.034 [2024-12-05 20:28:30.215742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.034 [2024-12-05 20:28:30.215776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.034 #12 NEW cov: 12332 ft: 14659 corp: 11/21b lim: 5 exec/s: 0 rss: 75Mb L: 1/3 MS: 1 EraseBytes-
00:08:37.034 [2024-12-05 20:28:30.256021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.034 [2024-12-05 20:28:30.256049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.034 [2024-12-05 20:28:30.256105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.034 [2024-12-05 20:28:30.256119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.034 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:08:37.034 #13 NEW cov: 12355 ft: 14689 corp: 12/23b lim: 5 exec/s: 0 rss: 75Mb L: 2/3 MS: 1 ShuffleBytes-
00:08:37.034 [2024-12-05 20:28:30.316307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.034 [2024-12-05 20:28:30.316334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.034 [2024-12-05 20:28:30.316392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.034 [2024-12-05 20:28:30.316406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.034 [2024-12-05 20:28:30.316477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.034 [2024-12-05 20:28:30.316493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:37.034 #14 NEW cov: 12355 ft: 14713 corp: 13/26b lim: 5 exec/s: 0 rss: 75Mb L: 3/3 MS: 1 InsertByte-
00:08:37.034 [2024-12-05 20:28:30.376176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.034 [2024-12-05 20:28:30.376204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.034 #15 NEW cov: 12355 ft: 14769 corp: 14/27b lim: 5 exec/s: 15 rss: 75Mb L: 1/3 MS: 1 ShuffleBytes-
00:08:37.034 [2024-12-05 20:28:30.416571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.034 [2024-12-05 20:28:30.416598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.034 [2024-12-05 20:28:30.416672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.034 [2024-12-05 20:28:30.416686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.034 [2024-12-05 20:28:30.416748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.035 [2024-12-05 20:28:30.416763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:37.035 #16 NEW cov: 12355 ft: 14787 corp: 15/30b lim: 5 exec/s: 16 rss: 75Mb L: 3/3 MS: 1 InsertByte-
00:08:37.293 [2024-12-05 20:28:30.476735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.476768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.476823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.476841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.476898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.476912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:37.293 #17 NEW cov: 12355 ft: 14822 corp: 16/33b lim: 5 exec/s: 17 rss: 75Mb L: 3/3 MS: 1 CopyPart-
00:08:37.293 [2024-12-05 20:28:30.536902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.536928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.536987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.537001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.537057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.537071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:37.293 #18 NEW cov: 12355 ft: 14875 corp: 17/36b lim: 5 exec/s: 18 rss: 75Mb L: 3/3 MS: 1 CopyPart-
00:08:37.293 [2024-12-05 20:28:30.576904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.576931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.576988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.577001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.293 #19 NEW cov: 12355 ft: 14881 corp: 18/38b lim: 5 exec/s: 19 rss: 75Mb L: 2/3 MS: 1 ChangeByte-
00:08:37.293 [2024-12-05 20:28:30.617109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.617134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.617191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.617205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.617259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.617272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:37.293 #20 NEW cov: 12355 ft: 14890 corp: 19/41b lim: 5 exec/s: 20 rss: 75Mb L: 3/3 MS: 1 ChangeByte-
00:08:37.293 [2024-12-05 20:28:30.677490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.677515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.677576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.677591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.677646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.677660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:37.293 [2024-12-05 20:28:30.677716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.293 [2024-12-05 20:28:30.677730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:08:37.293 #21 NEW cov: 12355 ft: 15172 corp: 20/45b lim: 5 exec/s: 21 rss: 75Mb L: 4/4 MS: 1 InsertByte-
00:08:37.551 [2024-12-05 20:28:30.737312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.737338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.551 [2024-12-05 20:28:30.737394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.737407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.551 #22 NEW cov: 12355 ft: 15202 corp: 21/47b lim: 5 exec/s: 22 rss: 75Mb L: 2/4 MS: 1 EraseBytes-
00:08:37.551 [2024-12-05 20:28:30.777258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.777283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.551 [2024-12-05 20:28:30.837606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.837631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.551 [2024-12-05 20:28:30.837703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.837718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.551 #24 NEW cov: 12355 ft: 15223 corp: 22/49b lim: 5 exec/s: 24 rss: 75Mb L: 2/4 MS: 2 ChangeByte-InsertByte-
00:08:37.551 [2024-12-05 20:28:30.877856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.877882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.551 [2024-12-05 20:28:30.877940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.877954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.551 [2024-12-05 20:28:30.878010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.878027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:37.551 #25 NEW cov: 12355 ft: 15238 corp: 23/52b lim: 5 exec/s: 25 rss: 75Mb L: 3/4 MS: 1 ChangeBinInt-
00:08:37.551 [2024-12-05 20:28:30.917796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.917823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.551 [2024-12-05 20:28:30.917906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.917921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.551 #26 NEW cov: 12355 ft: 15246 corp: 24/54b lim: 5 exec/s: 26 rss: 75Mb L: 2/4 MS: 1 ChangeBit-
00:08:37.551 [2024-12-05 20:28:30.957795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.551 [2024-12-05 20:28:30.957820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.551 #27 NEW cov: 12355 ft: 15258 corp: 25/55b lim: 5 exec/s: 27 rss: 75Mb L: 1/4 MS: 1 ShuffleBytes-
00:08:37.809 [2024-12-05 20:28:30.998031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.809 [2024-12-05 20:28:30.998057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.809 [2024-12-05 20:28:30.998131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.809 [2024-12-05 20:28:30.998146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.809 #28 NEW cov: 12355 ft: 15317 corp: 26/57b lim: 5 exec/s: 28 rss: 75Mb L: 2/4 MS: 1 EraseBytes-
00:08:37.809 [2024-12-05 20:28:31.038159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.809 [2024-12-05 20:28:31.038184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.809 [2024-12-05 20:28:31.038259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.809 [2024-12-05 20:28:31.038273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.809 #29 NEW cov: 12355 ft: 15332 corp: 27/59b lim: 5 exec/s: 29 rss: 75Mb L: 2/4 MS: 1 CopyPart-
00:08:37.809 [2024-12-05 20:28:31.098288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.810 [2024-12-05 20:28:31.098314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.810 [2024-12-05 20:28:31.098389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.810 [2024-12-05 20:28:31.098403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.810 #30 NEW cov: 12355 ft: 15333 corp: 28/61b lim: 5 exec/s: 30 rss: 75Mb L: 2/4 MS: 1 ChangeByte-
00:08:37.810 [2024-12-05 20:28:31.138244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.810 [2024-12-05 20:28:31.138272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.810 #31 NEW cov: 12355 ft: 15342 corp: 29/62b lim: 5 exec/s: 31 rss: 76Mb L: 1/4 MS: 1 ChangeByte-
00:08:37.810 [2024-12-05 20:28:31.198570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.810 [2024-12-05 20:28:31.198596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:37.810 [2024-12-05 20:28:31.198668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:37.810 [2024-12-05 20:28:31.198683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:37.810 #32 NEW cov: 12355 ft: 15373 corp: 30/64b lim: 5 exec/s: 32 rss: 76Mb L: 2/4 MS: 1 CrossOver-
00:08:38.068 [2024-12-05 20:28:31.258906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.068 [2024-12-05 20:28:31.258932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:38.068 [2024-12-05 20:28:31.259008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.068 [2024-12-05 20:28:31.259022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:38.068 [2024-12-05 20:28:31.259081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.068 [2024-12-05 20:28:31.259095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:38.068 #33 NEW cov: 12355 ft: 15431 corp: 31/67b lim: 5 exec/s: 33 rss: 76Mb L: 3/4 MS: 1 InsertByte-
00:08:38.068 [2024-12-05 20:28:31.318715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.068 [2024-12-05 20:28:31.318741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:38.068 #34 NEW cov: 12355 ft: 15444 corp: 32/68b lim: 5 exec/s: 34 rss: 76Mb L: 1/4 MS: 1 ChangeBit-
00:08:38.068 [2024-12-05 20:28:31.359118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.068 [2024-12-05 20:28:31.359143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:38.068 [2024-12-05 20:28:31.359200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.068 [2024-12-05 20:28:31.359214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:38.068 [2024-12-05 20:28:31.359288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.068 [2024-12-05 20:28:31.359302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:38.068 #35 NEW cov: 12355 ft: 15514 corp: 33/71b lim: 5 exec/s: 35 rss: 76Mb L: 3/4 MS: 1 ChangeBit-
00:08:38.068 [2024-12-05 20:28:31.399248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.068 [2024-12-05 20:28:31.399274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:38.068 [2024-12-05 20:28:31.399336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.068 [2024-12-05 20:28:31.399350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:38.069 [2024-12-05 20:28:31.399409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:08:38.069 [2024-12-05 20:28:31.399423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:38.069 #36 NEW cov: 12355 ft: 15556 corp: 34/74b lim: 5 exec/s: 18 rss: 76Mb L: 3/4 MS: 1 InsertByte-
00:08:38.069 #36 DONE cov: 12355 ft: 15556 corp: 34/74b lim: 5 exec/s: 18 rss: 76Mb
00:08:38.069 Done 36 runs in 2 second(s)
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410'
00:08:38.359 20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
20:28:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10
[2024-12-05 20:28:31.606829] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization...
[2024-12-05 20:28:31.606908] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843272 ]
[2024-12-05 20:28:31.824968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-05 20:28:31.863945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-05 20:28:31.923295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-12-05 20:28:31.939528] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 ***
INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 2739500485
INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad),
INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED exec/s: 0 rss: 67Mb
WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:08:38.671 This may also happen if the target rejected all inputs we tried so far
00:08:38.671 [2024-12-05 20:28:31.995037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d45fa91 cdw11:3e027800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.671 [2024-12-05 20:28:31.995069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:38.961 NEW_FUNC[1/715]: 0x448aa8 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205
00:08:38.961 NEW_FUNC[2/715]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:08:38.961 #9 NEW cov: 12131 ft: 12141 corp: 2/11b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 2 CopyPart-CMP- DE: "\035E\372\221>\002x\000"-
00:08:38.961 [2024-12-05 20:28:32.325765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9102fa45 cdw11:78003e1d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.961 [2024-12-05 20:28:32.325802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:38.961 NEW_FUNC[1/1]: 0x1c4d4a8 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:595
00:08:38.961 #15 NEW cov: 12263 ft: 12636 corp: 3/21b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes-
00:08:38.961 [2024-12-05 20:28:32.386004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d45fa91 cdw11:3e027800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.961 [2024-12-05 20:28:32.386033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:38.961 [2024-12-05 20:28:32.386091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a9102fa cdw11:0a457800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:38.961 [2024-12-05 20:28:32.386106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:39.219 #16 NEW cov: 12270 ft: 13127 corp: 4/41b lim: 40 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 CrossOver-
00:08:39.219 [2024-12-05 20:28:32.425914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d45fa91 cdw11:3eed7800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.219 [2024-12-05 20:28:32.425939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.219 #17 NEW cov: 12355 ft: 13408 corp: 5/51b lim: 40 exec/s: 0 rss: 74Mb L: 10/20 MS: 1 ChangeByte-
00:08:39.219 [2024-12-05 20:28:32.466019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d1d45fa cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.219 [2024-12-05 20:28:32.466044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.219 #18 NEW cov: 12355 ft: 13522 corp: 6/61b lim: 40 exec/s: 0 rss: 74Mb L: 10/20 MS: 1 PersAutoDict- DE: "\035E\372\221>\002x\000"-
00:08:39.219 [2024-12-05 20:28:32.506119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9102fa45 cdw11:78003e1d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.219 [2024-12-05 20:28:32.506145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.219 #19 NEW cov: 12355 ft: 13678 corp: 7/71b lim: 40 exec/s: 0 rss: 74Mb L: 10/20 MS: 1 CopyPart-
00:08:39.219 [2024-12-05 20:28:32.566303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:911d45fa cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.219 [2024-12-05 20:28:32.566328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.219 #20 NEW cov: 12355 ft: 13725 corp: 8/81b lim: 40 exec/s: 0 rss: 74Mb L: 10/20 MS: 1 PersAutoDict- DE: "\035E\372\221>\002x\000"-
00:08:39.219 [2024-12-05 20:28:32.626442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2c1d45fa cdw11:913eed78 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.219 [2024-12-05 20:28:32.626467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.477 #21 NEW cov: 12355 ft: 13751 corp: 9/92b lim: 40 exec/s: 0 rss: 74Mb L: 11/20 MS: 1 InsertByte-
00:08:39.478 [2024-12-05 20:28:32.686640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d1d45fa cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.478 [2024-12-05 20:28:32.686665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.478 #22 NEW cov: 12355 ft: 13771 corp: 10/102b lim: 40 exec/s: 0 rss: 74Mb L: 10/20 MS: 1 PersAutoDict- DE: "\035E\372\221>\002x\000"-
00:08:39.478 [2024-12-05 20:28:32.726719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d45fa91 cdw11:3eed7800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.478 [2024-12-05 20:28:32.726749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.478 #23 NEW cov: 12355 ft: 13854 corp: 11/112b lim: 40 exec/s: 0 rss: 74Mb L: 10/20 MS: 1 ChangeBit-
00:08:39.478 [2024-12-05 20:28:32.766981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:33333333 cdw11:33333333 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.478 [2024-12-05 20:28:32.767006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.478 [2024-12-05 20:28:32.767076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:33333333 cdw11:1d1d45fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.478 [2024-12-05 20:28:32.767090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:39.478 #24 NEW cov: 12355 ft: 13867 corp: 12/134b lim: 40 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 InsertRepeatedBytes-
00:08:39.478 [2024-12-05 20:28:32.827145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d1d45fa cdw11:1d45fa91 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.478 [2024-12-05 20:28:32.827170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.478 [2024-12-05 20:28:32.827242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3e027800 cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.478 [2024-12-05 20:28:32.827256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:39.478 #25 NEW cov: 12355 ft: 13873 corp: 13/152b lim: 40 exec/s: 0 rss: 74Mb L: 18/22 MS: 1 PersAutoDict- DE: "\035E\372\221>\002x\000"-
00:08:39.478 [2024-12-05 20:28:32.867096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9102fa45 cdw11:78003e1d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.478 [2024-12-05 20:28:32.867120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.478 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:08:39.478 #26 NEW cov: 12378 ft: 13904 corp: 14/163b lim: 40 exec/s: 0 rss: 74Mb L: 11/22 MS: 1 InsertByte-
00:08:39.478 [2024-12-05 20:28:32.907233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:911d45fa cdw11:913e0270 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.478 [2024-12-05 20:28:32.907257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.737 #27 NEW cov: 12378 ft: 13954 corp: 15/173b lim: 40 exec/s: 0 rss: 74Mb L: 10/22 MS: 1 ChangeBit-
00:08:39.737 [2024-12-05 20:28:32.967617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d45fa91 cdw11:3eed7800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.737 [2024-12-05 20:28:32.967642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.737 [2024-12-05 20:28:32.967717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.737 [2024-12-05 20:28:32.967731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:08:39.737 [2024-12-05 20:28:32.967785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.737 [2024-12-05 20:28:32.967799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:08:39.737 #28 NEW cov: 12378 ft: 14204 corp: 16/202b lim: 40 exec/s: 28 rss: 74Mb L: 29/29 MS: 1 InsertRepeatedBytes-
00:08:39.737 [2024-12-05 20:28:33.007621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:33333333 cdw11:33333333 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:39.737 [2024-12-05 20:28:33.007647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:08:39.737 [2024-12-05 20:28:33.007721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:33393333 cdw11:1d1d45fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-05
20:28:33.007736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.737 #29 NEW cov: 12378 ft: 14262 corp: 17/224b lim: 40 exec/s: 29 rss: 74Mb L: 22/29 MS: 1 ChangeBinInt- 00:08:39.737 [2024-12-05 20:28:33.067700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9102dc45 cdw11:78003e1d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.737 [2024-12-05 20:28:33.067726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.737 #30 NEW cov: 12378 ft: 14376 corp: 18/235b lim: 40 exec/s: 30 rss: 75Mb L: 11/29 MS: 1 ChangeByte- 00:08:39.737 [2024-12-05 20:28:33.127951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d1d45fa cdw11:1d45fa6f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.737 [2024-12-05 20:28:33.127976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.737 [2024-12-05 20:28:33.128034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:c1fd8600 cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.737 [2024-12-05 20:28:33.128048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.737 #31 NEW cov: 12378 ft: 14386 corp: 19/253b lim: 40 exec/s: 31 rss: 75Mb L: 18/29 MS: 1 ChangeBinInt- 00:08:39.996 [2024-12-05 20:28:33.188211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d45fa91 cdw11:3eed7800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.996 [2024-12-05 20:28:33.188237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.996 [2024-12-05 20:28:33.188306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.996 [2024-12-05 20:28:33.188319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.996 [2024-12-05 20:28:33.188391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.996 [2024-12-05 20:28:33.188406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:39.996 #32 NEW cov: 12378 ft: 14452 corp: 20/282b lim: 40 exec/s: 32 rss: 75Mb L: 29/29 MS: 1 ChangeASCIIInt- 00:08:39.996 [2024-12-05 20:28:33.248132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:911d45fa cdw11:91003e02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.996 [2024-12-05 20:28:33.248159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.996 #33 NEW cov: 12378 ft: 14477 corp: 21/293b lim: 40 exec/s: 33 rss: 75Mb L: 11/29 MS: 1 InsertByte- 00:08:39.996 [2024-12-05 20:28:33.308347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d1d45fa cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:39.996 [2024-12-05 20:28:33.308373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.996 #37 NEW cov: 12378 ft: 14483 corp: 22/303b lim: 40 exec/s: 37 rss: 75Mb L: 10/29 MS: 4 CrossOver-ChangeByte-EraseBytes-PersAutoDict- DE: "\035E\372\221>\002x\000"- 00:08:39.996 [2024-12-05 20:28:33.368513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2c291d45 cdw11:fa913eed SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.996 [2024-12-05 20:28:33.368540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.996 #38 NEW cov: 12378 ft: 14494 corp: 23/315b lim: 40 exec/s: 38 rss: 75Mb L: 12/29 MS: 1 InsertByte- 00:08:39.996 [2024-12-05 20:28:33.428967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d45fa91 cdw11:3eed7800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.996 [2024-12-05 20:28:33.428992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:39.996 [2024-12-05 20:28:33.429053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.996 [2024-12-05 20:28:33.429067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:39.996 [2024-12-05 20:28:33.429123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.996 [2024-12-05 20:28:33.429136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.255 #39 NEW cov: 12378 ft: 14536 corp: 24/344b lim: 40 exec/s: 39 rss: 75Mb L: 29/29 MS: 1 CrossOver- 00:08:40.255 [2024-12-05 20:28:33.488842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:333333fa cdw11:913e0270 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.255 [2024-12-05 20:28:33.488868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.255 #40 NEW cov: 12378 ft: 14595 corp: 25/354b lim: 40 exec/s: 40 rss: 75Mb L: 10/29 MS: 1 CrossOver- 00:08:40.255 [2024-12-05 20:28:33.529185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d45fa91 cdw11:3eed7800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.255 [2024-12-05 20:28:33.529211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.255 [2024-12-05 20:28:33.529289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.255 [2024-12-05 20:28:33.529304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.255 [2024-12-05 20:28:33.529360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30300a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.255 [2024-12-05 
20:28:33.529374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.255 #41 NEW cov: 12378 ft: 14608 corp: 26/378b lim: 40 exec/s: 41 rss: 75Mb L: 24/29 MS: 1 EraseBytes- 00:08:40.255 [2024-12-05 20:28:33.569038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d1d45fa cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.255 [2024-12-05 20:28:33.569064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.255 #42 NEW cov: 12378 ft: 14663 corp: 27/392b lim: 40 exec/s: 42 rss: 75Mb L: 14/29 MS: 1 CMP- DE: "\001\000\000\000"- 00:08:40.255 [2024-12-05 20:28:33.609142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d1d45fa cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.255 [2024-12-05 20:28:33.609167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.255 #43 NEW cov: 12378 ft: 14677 corp: 28/406b lim: 40 exec/s: 43 rss: 75Mb L: 14/29 MS: 1 ChangeBit- 00:08:40.255 [2024-12-05 20:28:33.669699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:333333fa cdw11:913e021d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.255 [2024-12-05 20:28:33.669724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.255 [2024-12-05 20:28:33.669800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:45fa913e cdw11:ed780030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.255 [2024-12-05 20:28:33.669816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.256 [2024-12-05 20:28:33.669872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3070000a cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.256 [2024-12-05 20:28:33.669896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:40.256 [2024-12-05 20:28:33.669948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:30303030 cdw11:30303030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.256 [2024-12-05 20:28:33.669961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:40.514 #44 NEW cov: 12378 ft: 15152 corp: 29/440b lim: 40 exec/s: 44 rss: 75Mb L: 34/34 MS: 1 CrossOver- 00:08:40.514 [2024-12-05 20:28:33.729649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d913e33 cdw11:33333333 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.514 [2024-12-05 20:28:33.729674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.514 [2024-12-05 20:28:33.729730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3933331d cdw11:1d45fa91 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.514 [2024-12-05 20:28:33.729748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.514 #47 NEW cov: 12378 ft: 15161 corp: 30/460b lim: 40 exec/s: 47 rss: 75Mb L: 20/34 MS: 3 EraseBytes-EraseBytes-CrossOver- 00:08:40.514 [2024-12-05 20:28:33.769594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:9102fa45 cdw11:78003e1d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.514 [2024-12-05 20:28:33.769619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.514 #48 NEW cov: 12378 ft: 15163 corp: 31/470b lim: 40 exec/s: 48 rss: 75Mb L: 10/34 MS: 1 ShuffleBytes- 00:08:40.514 [2024-12-05 20:28:33.809863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:ed780030 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.514 [2024-12-05 20:28:33.809887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.514 [2024-12-05 20:28:33.809960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:3030300b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.514 [2024-12-05 20:28:33.809974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:40.514 #52 NEW cov: 12378 ft: 15170 corp: 32/486b lim: 40 exec/s: 52 rss: 75Mb L: 16/34 MS: 4 ChangeBit-ShuffleBytes-PersAutoDict-CrossOver- DE: "\001\000\000\000"- 00:08:40.515 [2024-12-05 20:28:33.849845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:1d1d45fa cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.515 [2024-12-05 20:28:33.849869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.515 #53 NEW cov: 12378 ft: 15180 corp: 33/500b lim: 40 exec/s: 53 rss: 75Mb L: 14/34 MS: 1 CMP- DE: "\001\000\000\001"- 00:08:40.515 [2024-12-05 20:28:33.909992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:911d45fa cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.515 [2024-12-05 20:28:33.910016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.515 #54 NEW cov: 12378 ft: 15192 corp: 34/510b lim: 40 exec/s: 54 rss: 75Mb L: 10/34 MS: 1 PersAutoDict- DE: "\035E\372\221>\002x\000"- 00:08:40.515 [2024-12-05 20:28:33.950096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:191d45fa cdw11:913e0278 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.515 [2024-12-05 20:28:33.950121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.774 #55 NEW cov: 12378 ft: 15215 corp: 35/520b lim: 40 exec/s: 55 rss: 75Mb L: 10/34 MS: 1 ChangeBit- 00:08:40.774 [2024-12-05 20:28:33.990210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:913bfa45 cdw11:78003e1d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.774 [2024-12-05 20:28:33.990236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:40.774 #56 NEW cov: 12378 ft: 15236 corp: 36/531b lim: 
40 exec/s: 28 rss: 75Mb L: 11/34 MS: 1 ChangeByte- 00:08:40.774 #56 DONE cov: 12378 ft: 15236 corp: 36/531b lim: 40 exec/s: 28 rss: 75Mb 00:08:40.774 ###### Recommended dictionary. ###### 00:08:40.774 "\035E\372\221>\002x\000" # Uses: 6 00:08:40.774 "\001\000\000\000" # Uses: 1 00:08:40.774 "\001\000\000\001" # Uses: 0 00:08:40.774 ###### End of recommended dictionary. ###### 00:08:40.774 Done 56 runs in 2 second(s) 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:40.774 20:28:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:08:40.774 [2024-12-05 20:28:34.170805] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:40.774 [2024-12-05 20:28:34.170884] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843649 ] 00:08:41.033 [2024-12-05 20:28:34.378581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.033 [2024-12-05 20:28:34.419019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.292 [2024-12-05 20:28:34.478777] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.292 [2024-12-05 20:28:34.495034] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:08:41.292 INFO: Running with entropic power schedule (0xFF, 100). 00:08:41.292 INFO: Seed: 998550331 00:08:41.292 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:41.292 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:41.292 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:41.292 INFO: A corpus is not provided, starting from an empty corpus 00:08:41.292 #2 INITED exec/s: 0 rss: 67Mb 00:08:41.292 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:41.292 This may also happen if the target rejected all inputs we tried so far 00:08:41.292 [2024-12-05 20:28:34.555128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.292 [2024-12-05 20:28:34.555164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.292 [2024-12-05 20:28:34.555224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.292 [2024-12-05 20:28:34.555240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.292 [2024-12-05 20:28:34.555301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.292 [2024-12-05 20:28:34.555316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.292 [2024-12-05 20:28:34.555379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.292 [2024-12-05 20:28:34.555393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.292 [2024-12-05 20:28:34.555451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.292 [2024-12-05 20:28:34.555464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.551 NEW_FUNC[1/716]: 0x44a818 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:08:41.551 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:41.551 #6 NEW cov: 12160 ft: 12162 corp: 2/41b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 4 CopyPart-ChangeBinInt-CopyPart-InsertRepeatedBytes- 00:08:41.551 [2024-12-05 20:28:34.896042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.896103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.551 [2024-12-05 20:28:34.896183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1ce4e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.896207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.551 [2024-12-05 20:28:34.896292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e41c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.896314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.551 [2024-12-05 20:28:34.896389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.896411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.551 [2024-12-05 20:28:34.896488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.896510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.551 NEW_FUNC[1/1]: 0x1963318 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1571 00:08:41.551 #12 NEW cov: 12276 ft: 12752 corp: 3/81b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:41.551 [2024-12-05 20:28:34.965969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.965998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.551 [2024-12-05 20:28:34.966057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.966071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.551 [2024-12-05 20:28:34.966129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.966143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.551 [2024-12-05 20:28:34.966203] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.966217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.551 [2024-12-05 20:28:34.966272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.551 [2024-12-05 20:28:34.966286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.825 #18 NEW cov: 12282 ft: 12972 corp: 4/121b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 CrossOver- 00:08:41.825 [2024-12-05 20:28:35.006040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.825 [2024-12-05 20:28:35.006071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.825 [2024-12-05 20:28:35.006133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.825 [2024-12-05 20:28:35.006147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.825 [2024-12-05 20:28:35.006204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.825 [2024-12-05 20:28:35.006218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.825 [2024-12-05 20:28:35.006273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.825 [2024-12-05 20:28:35.006287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.825 [2024-12-05 20:28:35.006343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.825 [2024-12-05 20:28:35.006356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.825 #19 NEW cov: 12367 ft: 13320 corp: 5/161b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:41.825 [2024-12-05 20:28:35.045977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.825 [2024-12-05 20:28:35.046005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.825 [2024-12-05 20:28:35.046080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.825 [2024-12-05 20:28:35.046096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.826 [2024-12-05 20:28:35.046155] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.826 [2024-12-05 20:28:35.046168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.826 [2024-12-05 20:28:35.046226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.826 [2024-12-05 20:28:35.046241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.826 #20 NEW cov: 12367 ft: 13446 corp: 6/195b lim: 40 exec/s: 0 rss: 74Mb L: 34/40 MS: 1 EraseBytes- 00:08:41.826 [2024-12-05 20:28:35.106318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.826 [2024-12-05 20:28:35.106345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.826 [2024-12-05 20:28:35.106421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.826 [2024-12-05 20:28:35.106436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.826 [2024-12-05 20:28:35.106497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.826 [2024-12-05 20:28:35.106511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.826 [2024-12-05 20:28:35.106570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.826 [2024-12-05 20:28:35.106584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.826 [2024-12-05 20:28:35.106643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.826 [2024-12-05 20:28:35.106656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.826 #21 NEW cov: 12367 ft: 13570 corp: 7/235b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ShuffleBytes- 00:08:41.826 [2024-12-05 20:28:35.146377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:dde3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.826 [2024-12-05 20:28:35.146403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.826 [2024-12-05 20:28:35.146480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e31b1c cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.827 [2024-12-05 20:28:35.146495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.827 [2024-12-05 
20:28:35.146552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e3e41c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.827 [2024-12-05 20:28:35.146566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.827 [2024-12-05 20:28:35.146621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.827 [2024-12-05 20:28:35.146635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.827 [2024-12-05 20:28:35.146693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.827 [2024-12-05 20:28:35.146707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.827 #22 NEW cov: 12367 ft: 13732 corp: 8/275b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:41.827 [2024-12-05 20:28:35.206595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e81c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.827 [2024-12-05 20:28:35.206622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.827 [2024-12-05 20:28:35.206685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.827 [2024-12-05 20:28:35.206698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.827 [2024-12-05 20:28:35.206760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.827 [2024-12-05 20:28:35.206774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.827 [2024-12-05 20:28:35.206834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.827 [2024-12-05 20:28:35.206846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.828 [2024-12-05 20:28:35.206908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.828 [2024-12-05 20:28:35.206921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.828 #23 NEW cov: 12367 ft: 13760 corp: 9/315b lim: 40 exec/s: 0 rss: 74Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:41.828 [2024-12-05 20:28:35.246338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.828 [2024-12-05 20:28:35.246364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.828 
[2024-12-05 20:28:35.246441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.828 [2024-12-05 20:28:35.246455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.828 [2024-12-05 20:28:35.246514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.828 [2024-12-05 20:28:35.246528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.092 #24 NEW cov: 12367 ft: 14092 corp: 10/340b lim: 40 exec/s: 0 rss: 74Mb L: 25/40 MS: 1 EraseBytes- 00:08:42.092 [2024-12-05 20:28:35.286293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:dde3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.286319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.092 [2024-12-05 20:28:35.286395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e31b1c cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.286410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.092 #25 NEW cov: 12367 ft: 14397 corp: 11/358b lim: 40 exec/s: 0 rss: 74Mb L: 18/40 MS: 1 CrossOver- 00:08:42.092 [2024-12-05 20:28:35.346789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c311c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.346815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.092 [2024-12-05 20:28:35.346875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.346889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.092 [2024-12-05 20:28:35.346953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.346967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.092 [2024-12-05 20:28:35.347022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.347036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.092 #26 NEW cov: 12367 ft: 14426 corp: 12/392b lim: 40 exec/s: 0 rss: 74Mb L: 34/40 MS: 1 ChangeByte- 00:08:42.092 [2024-12-05 20:28:35.406993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.407019] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.092 [2024-12-05 20:28:35.407093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.407108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.092 [2024-12-05 20:28:35.407164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.407178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.092 [2024-12-05 20:28:35.407236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.407250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.092 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:42.092 #27 NEW cov: 12390 ft: 14463 corp: 13/426b lim: 40 exec/s: 0 rss: 74Mb L: 34/40 MS: 1 ShuffleBytes- 00:08:42.092 [2024-12-05 20:28:35.447072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.447099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.092 [2024-12-05 20:28:35.447157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.092 [2024-12-05 20:28:35.447171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.093 [2024-12-05 20:28:35.447228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.093 [2024-12-05 20:28:35.447242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.093 [2024-12-05 20:28:35.447299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.093 [2024-12-05 20:28:35.447312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.093 #28 NEW cov: 12390 ft: 14526 corp: 14/465b lim: 40 exec/s: 0 rss: 75Mb L: 39/40 MS: 1 CrossOver- 00:08:42.093 [2024-12-05 20:28:35.507402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e81c1c1c cdw11:1c761ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.093 [2024-12-05 20:28:35.507429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.093 [2024-12-05 20:28:35.507510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.093 [2024-12-05 20:28:35.507524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.093 [2024-12-05 20:28:35.507580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.093 [2024-12-05 20:28:35.507594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.093 [2024-12-05 20:28:35.507652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.093 [2024-12-05 20:28:35.507666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.093 [2024-12-05 20:28:35.507724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.093 [2024-12-05 20:28:35.507737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.352 #29 NEW cov: 12390 ft: 14562 corp: 15/505b lim: 40 exec/s: 29 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:08:42.352 [2024-12-05 20:28:35.567393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.567419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.567495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.567510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.567567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.567580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.567637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c081c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.567650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.352 #30 NEW cov: 12390 ft: 14594 corp: 16/541b lim: 40 exec/s: 30 rss: 75Mb L: 36/40 MS: 1 CopyPart- 00:08:42.352 [2024-12-05 20:28:35.607696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c97 cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.607722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.607800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) 
qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.607826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.607883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.607896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.607959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.607972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.608032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.608046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.352 #36 NEW cov: 12390 ft: 14608 corp: 17/581b lim: 40 exec/s: 36 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:08:42.352 [2024-12-05 20:28:35.667826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.667852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.667928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.667943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.668000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e1e3e3e3 cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.668014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.668074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.668088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.668148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.668162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.352 #37 NEW cov: 12390 ft: 14611 corp: 18/621b lim: 40 exec/s: 37 rss: 75Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:42.352 [2024-12-05 20:28:35.707821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.707849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.707913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.707927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.707988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.708001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.708061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.708075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.352 #38 NEW cov: 12390 ft: 14629 corp: 19/655b lim: 40 exec/s: 38 rss: 75Mb L: 34/40 MS: 1 CMP- DE: "\000\000\000\017"- 00:08:42.352 [2024-12-05 20:28:35.747616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1cdd cdw11:1ce3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.747642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.352 [2024-12-05 20:28:35.747700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e31b1c cdw11:e3e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.352 [2024-12-05 20:28:35.747715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.612 #39 NEW cov: 12390 ft: 14656 corp: 20/673b lim: 40 exec/s: 39 rss: 75Mb L: 18/40 MS: 1 ShuffleBytes- 00:08:42.612 [2024-12-05 20:28:35.807817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1cdd cdw11:1ce30000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.807846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.807906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000fe3e3 cdw11:e3e31b1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.807920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.612 #40 NEW cov: 12390 ft: 14659 corp: 21/695b lim: 40 exec/s: 40 rss: 75Mb L: 22/40 MS: 1 PersAutoDict- DE: "\000\000\000\017"- 00:08:42.612 [2024-12-05 20:28:35.868434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e81c1c1c cdw11:1c1c1c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.868461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.868524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000fe3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.868538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.868597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.868611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.868671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.868685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.868748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.868762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.612 #41 NEW cov: 12390 ft: 14683 corp: 22/735b lim: 40 exec/s: 41 rss: 75Mb L: 40/40 MS: 1 PersAutoDict- DE: "\000\000\000\017"- 00:08:42.612 [2024-12-05 20:28:35.908309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.908339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.908405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c000000 cdw11:0f1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.908420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.908495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.908510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.612 #42 NEW cov: 12390 ft: 14746 corp: 23/764b lim: 40 exec/s: 42 rss: 75Mb L: 29/40 MS: 1 PersAutoDict- DE: "\000\000\000\017"- 00:08:42.612 [2024-12-05 20:28:35.978731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.978765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.978828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.978843] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.978904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:9c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.978919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.978981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.978995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:35.979056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:35.979070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.612 #43 NEW cov: 12390 ft: 14784 corp: 24/804b lim: 40 exec/s: 43 rss: 75Mb L: 40/40 MS: 1 ChangeBit- 00:08:42.612 [2024-12-05 20:28:36.018595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:36.018622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:36.018681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:36.018695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:36.018758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:36.018790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.612 [2024-12-05 20:28:36.018850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.612 [2024-12-05 20:28:36.018863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.612 #44 NEW cov: 12390 ft: 14807 corp: 25/840b lim: 40 exec/s: 44 rss: 75Mb L: 36/40 MS: 1 EraseBytes- 00:08:42.872 [2024-12-05 20:28:36.058774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.058807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.058882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 
20:28:36.058897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.058957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c2e1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.058971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.059028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.059042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.872 #45 NEW cov: 12390 ft: 14825 corp: 26/874b lim: 40 exec/s: 45 rss: 75Mb L: 34/40 MS: 1 ChangeByte- 00:08:42.872 [2024-12-05 20:28:36.119023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.119050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.119113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.119127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.119186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.119200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.119257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.119271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.119329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c3c1c08 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.119343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.872 #46 NEW cov: 12390 ft: 14832 corp: 27/914b lim: 40 exec/s: 46 rss: 75Mb L: 40/40 MS: 1 ChangeByte- 00:08:42.872 [2024-12-05 20:28:36.158871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.158897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.158960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c3b cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 
[2024-12-05 20:28:36.158974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.159035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.159048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.872 #47 NEW cov: 12390 ft: 14907 corp: 28/940b lim: 40 exec/s: 47 rss: 75Mb L: 26/40 MS: 1 InsertByte- 00:08:42.872 [2024-12-05 20:28:36.199148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.199174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.199237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.199251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.199313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.199327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.199388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c9c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.199402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.872 #53 NEW cov: 12390 ft: 14912 corp: 29/974b lim: 40 exec/s: 53 rss: 75Mb L: 34/40 MS: 1 ChangeBit- 00:08:42.872 [2024-12-05 20:28:36.239081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.239107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.239169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.239183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.239259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.239273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.872 #54 NEW cov: 12390 ft: 14951 corp: 30/999b lim: 40 exec/s: 54 rss: 75Mb L: 25/40 MS: 1 CopyPart- 00:08:42.872 [2024-12-05 20:28:36.279354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c311c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.279380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.279454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.279469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.279526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.279540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.872 [2024-12-05 20:28:36.279598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.872 [2024-12-05 20:28:36.279612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.132 #55 NEW cov: 12390 ft: 14988 corp: 31/1033b lim: 40 exec/s: 55 rss: 75Mb L: 34/40 MS: 1 ShuffleBytes- 00:08:43.132 [2024-12-05 20:28:36.339663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e81c1c1c cdw11:1c761ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.132 [2024-12-05 20:28:36.339689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.132 [2024-12-05 20:28:36.339749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.132 [2024-12-05 20:28:36.339780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.132 [2024-12-05 20:28:36.339840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.132 [2024-12-05 20:28:36.339854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.132 [2024-12-05 20:28:36.339914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.132 [2024-12-05 20:28:36.339928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.132 [2024-12-05 20:28:36.339987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:1c1c1c1c cdw11:1c1c2808 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.132 [2024-12-05 20:28:36.340001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.132 #56 NEW cov: 12390 ft: 14998 corp: 32/1073b lim: 40 exec/s: 56 rss: 75Mb L: 40/40 MS: 1 ChangeBinInt- 00:08:43.132 [2024-12-05 20:28:36.399313] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1ce4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.132 [2024-12-05 20:28:36.399339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.132 [2024-12-05 20:28:36.399413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e41c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.132 [2024-12-05 20:28:36.399428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.133 #57 NEW cov: 12390 ft: 15010 corp: 33/1095b lim: 40 exec/s: 57 rss: 75Mb L: 22/40 MS: 1 EraseBytes- 00:08:43.133 [2024-12-05 20:28:36.459689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c0000 cdw11:000f1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.133 [2024-12-05 20:28:36.459716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.133 [2024-12-05 20:28:36.459796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c3b cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.133 [2024-12-05 20:28:36.459811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.133 [2024-12-05 20:28:36.459872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.133 [2024-12-05 20:28:36.459886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.133 #58 NEW cov: 12390 ft: 15020 corp: 34/1121b lim: 40 exec/s: 58 rss: 75Mb L: 26/40 MS: 1 PersAutoDict- DE: "\000\000\000\017"- 00:08:43.133 [2024-12-05 20:28:36.519815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.133 [2024-12-05 20:28:36.519844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.133 [2024-12-05 20:28:36.519921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:1c1c1c5d cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.133 [2024-12-05 20:28:36.519936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.133 [2024-12-05 20:28:36.519994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.133 [2024-12-05 20:28:36.520008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.133 #59 NEW cov: 12390 ft: 15028 corp: 35/1147b lim: 40 exec/s: 29 rss: 75Mb L: 26/40 MS: 1 ChangeByte- 00:08:43.133 #59 DONE cov: 12390 ft: 15028 corp: 35/1147b lim: 40 exec/s: 29 rss: 75Mb 00:08:43.133 ###### Recommended dictionary. ###### 00:08:43.133 "\000\000\000\017" # Uses: 5 00:08:43.133 ###### End of recommended dictionary. 
###### 00:08:43.133 Done 59 runs in 2 second(s) 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:43.392 20:28:36 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:08:43.392 [2024-12-05 20:28:36.700941] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:43.392 [2024-12-05 20:28:36.701020] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844014 ] 00:08:43.651 [2024-12-05 20:28:36.903722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.651 [2024-12-05 20:28:36.941825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.651 [2024-12-05 20:28:37.001222] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.651 [2024-12-05 20:28:37.017455] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:08:43.651 INFO: Running with entropic power schedule (0xFF, 100). 00:08:43.651 INFO: Seed: 3522543762 00:08:43.651 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:43.651 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:43.651 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:43.651 INFO: A corpus is not provided, starting from an empty corpus 00:08:43.651 #2 INITED exec/s: 0 rss: 67Mb 00:08:43.651 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:43.651 This may also happen if the target rejected all inputs we tried so far 00:08:43.651 [2024-12-05 20:28:37.072847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.651 [2024-12-05 20:28:37.072878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.168 NEW_FUNC[1/717]: 0x44c588 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:08:44.168 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:44.168 #3 NEW cov: 12159 ft: 12147 corp: 2/11b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:08:44.168 [2024-12-05 20:28:37.413804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.168 [2024-12-05 20:28:37.413852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.168 #7 NEW cov: 12274 ft: 12749 corp: 3/20b lim: 40 exec/s: 0 rss: 74Mb L: 9/10 MS: 4 CopyPart-ShuffleBytes-CopyPart-CrossOver- 00:08:44.168 [2024-12-05 20:28:37.453798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a66 cdw11:65655c9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.168 [2024-12-05 20:28:37.453825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.168 #8 NEW cov: 12280 ft: 12984 corp: 4/30b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ChangeBinInt- 00:08:44.168 [2024-12-05 20:28:37.513971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.168 [2024-12-05 20:28:37.513997] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.168 #9 NEW cov: 12365 ft: 13255 corp: 5/40b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertByte- 00:08:44.168 [2024-12-05 20:28:37.574079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.168 [2024-12-05 20:28:37.574104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.168 #10 NEW cov: 12365 ft: 13341 corp: 6/50b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:08:44.427 [2024-12-05 20:28:37.614154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:63656565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.427 [2024-12-05 20:28:37.614179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.427 #11 NEW cov: 12365 ft: 13372 corp: 7/59b lim: 40 exec/s: 0 rss: 74Mb L: 9/10 MS: 1 ChangeBinInt- 00:08:44.427 [2024-12-05 20:28:37.654252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004000 cdw11:00da9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.427 [2024-12-05 20:28:37.654277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.427 #14 NEW cov: 12365 ft: 13432 corp: 8/70b lim: 40 exec/s: 0 rss: 74Mb L: 11/11 MS: 3 EraseBytes-ChangeBit-CMP- DE: "\000@\000\000"- 00:08:44.427 [2024-12-05 20:28:37.714433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:63656565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.427 [2024-12-05 20:28:37.714458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.427 #15 NEW cov: 12365 ft: 13466 corp: 9/80b lim: 40 exec/s: 0 rss: 74Mb L: 10/11 MS: 1 InsertByte- 00:08:44.427 [2024-12-05 20:28:37.774600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9e9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.427 [2024-12-05 20:28:37.774625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.427 #16 NEW cov: 12365 ft: 13518 corp: 10/90b lim: 40 exec/s: 0 rss: 74Mb L: 10/11 MS: 1 ChangeBit- 00:08:44.427 [2024-12-05 20:28:37.814712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.427 [2024-12-05 20:28:37.814738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.427 #17 NEW cov: 12365 ft: 13554 corp: 11/99b lim: 40 exec/s: 0 rss: 74Mb L: 9/11 MS: 1 CopyPart- 00:08:44.427 [2024-12-05 20:28:37.854801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004000 cdw11:00da9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.427 [2024-12-05 20:28:37.854826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.686 #18 NEW cov: 12365 ft: 13650 
corp: 12/111b lim: 40 exec/s: 0 rss: 74Mb L: 12/12 MS: 1 InsertByte- 00:08:44.686 [2024-12-05 20:28:37.915142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.686 [2024-12-05 20:28:37.915167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.686 [2024-12-05 20:28:37.915223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:9a9a9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.686 [2024-12-05 20:28:37.915237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.686 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:44.686 #19 NEW cov: 12388 ft: 14395 corp: 13/128b lim: 40 exec/s: 0 rss: 75Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:08:44.686 [2024-12-05 20:28:37.975117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004000 cdw11:004000da SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.686 [2024-12-05 20:28:37.975143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.686 #22 NEW cov: 12388 ft: 14419 corp: 14/139b lim: 40 exec/s: 0 rss: 75Mb L: 11/17 MS: 3 EraseBytes-ChangeBit-CopyPart- 00:08:44.686 [2024-12-05 20:28:38.015208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3f0a0040 cdw11:0000da9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.686 [2024-12-05 20:28:38.015233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.686 #24 NEW cov: 12388 ft: 14428 corp: 15/151b lim: 40 exec/s: 0 rss: 75Mb L: 12/17 MS: 2 ChangeByte-CrossOver- 00:08:44.687 [2024-12-05 20:28:38.055344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004000 cdw11:00da9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.687 [2024-12-05 20:28:38.055370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.687 #25 NEW cov: 12388 ft: 14431 corp: 16/162b lim: 40 exec/s: 25 rss: 75Mb L: 11/17 MS: 1 ChangeByte- 00:08:44.687 [2024-12-05 20:28:38.095722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:63656565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.687 [2024-12-05 20:28:38.095752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.687 [2024-12-05 20:28:38.095826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:329a1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.687 [2024-12-05 20:28:38.095851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.687 [2024-12-05 20:28:38.095905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.687 [2024-12-05 20:28:38.095918] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:44.946 #26 NEW cov: 12388 ft: 14722 corp: 17/191b lim: 40 exec/s: 26 rss: 75Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:08:44.946 [2024-12-05 20:28:38.155650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004000 cdw11:00da9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.946 [2024-12-05 20:28:38.155676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.946 #27 NEW cov: 12388 ft: 14743 corp: 18/203b lim: 40 exec/s: 27 rss: 75Mb L: 12/29 MS: 1 ShuffleBytes- 00:08:44.946 [2024-12-05 20:28:38.215917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.946 [2024-12-05 20:28:38.215943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.946 [2024-12-05 20:28:38.215998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:000a9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.946 [2024-12-05 20:28:38.216012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:44.946 #28 NEW cov: 12388 ft: 14834 corp: 19/222b lim: 40 exec/s: 28 rss: 75Mb L: 19/29 MS: 1 CMP- DE: "\001\001"- 00:08:44.946 [2024-12-05 20:28:38.275914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:919a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.946 [2024-12-05 20:28:38.275939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.946 #29 NEW cov: 12388 ft: 14840 corp: 20/231b lim: 40 exec/s: 29 rss: 75Mb L: 9/29 MS: 1 ChangeBinInt- 00:08:44.946 [2024-12-05 20:28:38.336044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9e9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.946 [2024-12-05 20:28:38.336069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.946 #30 NEW cov: 12388 ft: 14857 corp: 21/241b lim: 40 exec/s: 30 rss: 75Mb L: 10/29 MS: 1 ShuffleBytes- 00:08:45.205 [2024-12-05 20:28:38.396369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.205 [2024-12-05 20:28:38.396393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.205 [2024-12-05 20:28:38.396448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:9a9a9a9a cdw11:9a9a9a24 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.205 [2024-12-05 20:28:38.396462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.205 #31 NEW cov: 12388 ft: 14865 corp: 22/259b lim: 40 exec/s: 31 rss: 75Mb L: 18/29 MS: 1 InsertByte- 00:08:45.205 [2024-12-05 20:28:38.436316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:3ff6ffbf cdw11:ffff2565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.205 [2024-12-05 20:28:38.436340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.205 #32 NEW cov: 12388 ft: 14916 corp: 23/271b lim: 40 exec/s: 32 rss: 75Mb L: 12/29 MS: 1 ChangeBinInt- 00:08:45.205 [2024-12-05 20:28:38.496493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004000 cdw11:004000da SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.205 [2024-12-05 20:28:38.496517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.205 #33 NEW cov: 12388 ft: 15008 corp: 24/282b lim: 40 exec/s: 33 rss: 75Mb L: 11/29 MS: 1 ChangeByte- 00:08:45.205 [2024-12-05 20:28:38.556955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9a9a cdw11:65656332 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.205 [2024-12-05 20:28:38.556982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.205 [2024-12-05 20:28:38.557037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:659a1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.205 [2024-12-05 20:28:38.557051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.205 [2024-12-05 20:28:38.557121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.205 [2024-12-05 20:28:38.557135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.205 #34 NEW cov: 12388 ft: 15023 corp: 25/311b lim: 40 exec/s: 34 rss: 75Mb L: 29/29 MS: 1 ShuffleBytes- 00:08:45.205 [2024-12-05 20:28:38.616978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.205 [2024-12-05 20:28:38.617004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.205 [2024-12-05 20:28:38.617076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:989a9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.205 [2024-12-05 20:28:38.617090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.206 #35 NEW cov: 12388 ft: 15025 corp: 26/328b lim: 40 exec/s: 35 rss: 75Mb L: 17/29 MS: 1 ChangeBit- 00:08:45.464 [2024-12-05 20:28:38.657371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004000 cdw11:00da9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.657397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.464 [2024-12-05 20:28:38.657470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:9a9a489a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.657484] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.464 [2024-12-05 20:28:38.657536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.657550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.464 [2024-12-05 20:28:38.657608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.657622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.464 #36 NEW cov: 12388 ft: 15381 corp: 27/360b lim: 40 exec/s: 36 rss: 75Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:08:45.464 [2024-12-05 20:28:38.697079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a00bbff cdw11:ff256565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.697105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.464 #37 NEW cov: 12388 ft: 15403 corp: 28/371b lim: 40 exec/s: 37 rss: 75Mb L: 11/32 MS: 1 ChangeBinInt- 00:08:45.464 [2024-12-05 20:28:38.737157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1e009a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.737183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.464 #38 NEW cov: 12388 ft: 15413 corp: 29/381b lim: 40 exec/s: 38 rss: 75Mb L: 10/32 MS: 1 CMP- DE: "\036\000"- 00:08:45.464 [2024-12-05 20:28:38.777714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004000 cdw11:00da9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.777739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.464 [2024-12-05 20:28:38.777806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:9a9a489a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.777820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.464 [2024-12-05 20:28:38.777878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00400000 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.777892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.464 [2024-12-05 20:28:38.777945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.464 [2024-12-05 20:28:38.777959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:45.464 #39 NEW cov: 12388 ft: 15425 corp: 30/413b lim: 40 exec/s: 39 rss: 75Mb L: 32/32 MS: 1 PersAutoDict- 
DE: "\000@\000\000"- 00:08:45.464 [2024-12-05 20:28:38.837432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a9a9e9a cdw11:63656565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.465 [2024-12-05 20:28:38.837457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.465 #40 NEW cov: 12388 ft: 15441 corp: 31/423b lim: 40 exec/s: 40 rss: 75Mb L: 10/32 MS: 1 ChangeBit- 00:08:45.465 [2024-12-05 20:28:38.877557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a004000 cdw11:005bda9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.465 [2024-12-05 20:28:38.877583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.465 #41 NEW cov: 12388 ft: 15448 corp: 32/436b lim: 40 exec/s: 41 rss: 75Mb L: 13/32 MS: 1 InsertByte- 00:08:45.723 [2024-12-05 20:28:38.917661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3ff6ffbf cdw11:ff256569 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.723 [2024-12-05 20:28:38.917686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.723 #42 NEW cov: 12388 ft: 15449 corp: 33/447b lim: 40 exec/s: 42 rss: 76Mb L: 11/32 MS: 1 EraseBytes- 00:08:45.724 [2024-12-05 20:28:38.977796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e99a9a9a cdw11:63656565 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.724 [2024-12-05 20:28:38.977821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.724 #43 NEW cov: 12388 ft: 15465 corp: 34/456b lim: 40 exec/s: 43 rss: 76Mb L: 9/32 MS: 1 ChangeByte- 00:08:45.724 [2024-12-05 20:28:39.018084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:9a9a000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.724 [2024-12-05 20:28:39.018109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.724 [2024-12-05 20:28:39.018179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:989a9a9a cdw11:9a9a9a9a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.724 [2024-12-05 20:28:39.018194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.724 #44 NEW cov: 12388 ft: 15481 corp: 35/473b lim: 40 exec/s: 22 rss: 76Mb L: 17/32 MS: 1 CopyPart- 00:08:45.724 #44 DONE cov: 12388 ft: 15481 corp: 35/473b lim: 40 exec/s: 22 rss: 76Mb 00:08:45.724 ###### Recommended dictionary. ###### 00:08:45.724 "\000@\000\000" # Uses: 1 00:08:45.724 "\001\001" # Uses: 0 00:08:45.724 "\036\000" # Uses: 0 00:08:45.724 ###### End of recommended dictionary. 
###### 00:08:45.724 Done 44 runs in 2 second(s) 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:45.983 20:28:39 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:08:45.983 [2024-12-05 20:28:39.223025] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:45.983 [2024-12-05 20:28:39.223100] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844388 ] 00:08:46.242 [2024-12-05 20:28:39.433830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.242 [2024-12-05 20:28:39.471225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.242 [2024-12-05 20:28:39.530458] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.242 [2024-12-05 20:28:39.546678] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:08:46.242 INFO: Running with entropic power schedule (0xFF, 100). 00:08:46.242 INFO: Seed: 1755577184 00:08:46.242 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:46.242 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:46.242 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:46.242 INFO: A corpus is not provided, starting from an empty corpus 00:08:46.242 #2 INITED exec/s: 0 rss: 67Mb 00:08:46.242 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:46.242 This may also happen if the target rejected all inputs we tried so far 00:08:46.242 [2024-12-05 20:28:39.612168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a424242 cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.242 [2024-12-05 20:28:39.612199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.501 NEW_FUNC[1/716]: 0x44e158 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:08:46.501 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:46.501 #4 NEW cov: 12149 ft: 12136 corp: 2/11b lim: 40 exec/s: 0 rss: 74Mb L: 10/10 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:46.759 [2024-12-05 20:28:39.953542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:39.953607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.760 [2024-12-05 20:28:39.953704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:39.953730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.760 [2024-12-05 20:28:39.953834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:39.953860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.760 #5 NEW cov: 12262 ft: 13121 corp: 3/37b lim: 40 exec/s: 0 rss: 74Mb L: 26/26 MS: 1 InsertRepeatedBytes- 
00:08:46.760 [2024-12-05 20:28:40.003102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a424242 cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:40.003132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.760 #6 NEW cov: 12268 ft: 13343 corp: 4/52b lim: 40 exec/s: 0 rss: 74Mb L: 15/26 MS: 1 CopyPart- 00:08:46.760 [2024-12-05 20:28:40.063321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a424242 cdw11:424242e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:40.063352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.760 #7 NEW cov: 12353 ft: 13562 corp: 5/62b lim: 40 exec/s: 0 rss: 74Mb L: 10/26 MS: 1 ChangeByte- 00:08:46.760 [2024-12-05 20:28:40.103618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:40.103656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.760 [2024-12-05 20:28:40.103716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff4242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:40.103730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.760 [2024-12-05 20:28:40.103797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:40.103812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.760 #8 NEW cov: 12353 ft: 13670 corp: 6/89b lim: 40 exec/s: 0 rss: 74Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:08:46.760 [2024-12-05 20:28:40.163887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:40.163916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.760 [2024-12-05 20:28:40.163975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949437 cdw11:37373737 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:40.163990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.760 [2024-12-05 20:28:40.164048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:37373737 cdw11:37379494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:40.164063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.760 [2024-12-05 20:28:40.164120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:46.760 [2024-12-05 20:28:40.164134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.018 #9 NEW cov: 12353 ft: 14184 corp: 7/126b lim: 40 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:08:47.018 [2024-12-05 20:28:40.223788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2fcacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.018 [2024-12-05 20:28:40.223816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.018 [2024-12-05 20:28:40.223875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacaca7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.018 [2024-12-05 20:28:40.223890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.018 #14 NEW cov: 12353 ft: 14448 corp: 8/145b lim: 40 exec/s: 0 rss: 74Mb L: 19/37 MS: 5 CrossOver-InsertByte-InsertByte-CopyPart-InsertRepeatedBytes- 00:08:47.018 [2024-12-05 20:28:40.263791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.018 [2024-12-05 20:28:40.263817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.018 #16 NEW cov: 12353 ft: 14484 corp: 9/159b lim: 40 exec/s: 0 rss: 74Mb L: 14/37 MS: 2 EraseBytes-InsertRepeatedBytes- 00:08:47.018 [2024-12-05 20:28:40.324214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.018 [2024-12-05 20:28:40.324244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.018 [2024-12-05 20:28:40.324320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff4242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.018 [2024-12-05 20:28:40.324335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.018 [2024-12-05 20:28:40.324391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42454242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.018 [2024-12-05 20:28:40.324404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.018 #17 NEW cov: 12353 ft: 14531 corp: 10/186b lim: 40 exec/s: 0 rss: 74Mb L: 27/37 MS: 1 ChangeBinInt- 00:08:47.018 [2024-12-05 20:28:40.384338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.018 [2024-12-05 20:28:40.384364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.018 [2024-12-05 20:28:40.384422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:47.018 [2024-12-05 20:28:40.384437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.018 [2024-12-05 20:28:40.384494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.018 [2024-12-05 20:28:40.384508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.018 #18 NEW cov: 12353 ft: 14630 corp: 11/212b lim: 40 exec/s: 0 rss: 74Mb L: 26/37 MS: 1 ShuffleBytes- 00:08:47.018 [2024-12-05 20:28:40.424198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a424242 cdw11:424242e2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.018 [2024-12-05 20:28:40.424224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.018 #19 NEW cov: 12353 ft: 14739 corp: 12/222b lim: 40 exec/s: 0 rss: 74Mb L: 10/37 MS: 1 ChangeBit- 00:08:47.296 [2024-12-05 20:28:40.464650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2fcacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.464686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.464762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacaca7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.464777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.464834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.464848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.464904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.464917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.296 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:47.296 #20 NEW cov: 12376 ft: 14769 corp: 13/257b lim: 40 exec/s: 0 rss: 74Mb L: 35/37 MS: 1 InsertRepeatedBytes- 00:08:47.296 [2024-12-05 20:28:40.524853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.524879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.524954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:94949437 cdw11:37373737 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.524968] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.525024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:37373737 cdw11:37379494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.525038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.525093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.525107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.296 #21 NEW cov: 12376 ft: 14773 corp: 14/294b lim: 40 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 ChangeByte- 00:08:47.296 [2024-12-05 20:28:40.584903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a949494 cdw11:e3d5cdfe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.584929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.585003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:47027800 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.585017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.585074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.585088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.296 #22 NEW cov: 12376 ft: 14801 corp: 15/320b lim: 40 exec/s: 22 rss: 74Mb L: 26/37 MS: 1 CMP- DE: "\343\325\315\376G\002x\000"- 00:08:47.296 [2024-12-05 20:28:40.624707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.624732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.296 #23 NEW cov: 12376 ft: 14837 corp: 16/335b lim: 40 exec/s: 23 rss: 75Mb L: 15/37 MS: 1 InsertByte- 00:08:47.296 [2024-12-05 20:28:40.685137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42ffff cdw11:d8ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.685162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.685234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff42 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.685249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.296 [2024-12-05 20:28:40.685307] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.685323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.296 #24 NEW cov: 12376 ft: 14867 corp: 17/363b lim: 40 exec/s: 24 rss: 75Mb L: 28/37 MS: 1 InsertByte- 00:08:47.296 [2024-12-05 20:28:40.724989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.296 [2024-12-05 20:28:40.725014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.555 #25 NEW cov: 12376 ft: 14918 corp: 18/378b lim: 40 exec/s: 25 rss: 75Mb L: 15/37 MS: 1 InsertByte- 00:08:47.555 [2024-12-05 20:28:40.765457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a949437 cdw11:37373737 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.555 [2024-12-05 20:28:40.765483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.555 [2024-12-05 20:28:40.765557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:37379494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.555 [2024-12-05 20:28:40.765572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.555 [2024-12-05 20:28:40.765630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:94949437 cdw11:37379494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.555 [2024-12-05 20:28:40.765643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.555 [2024-12-05 20:28:40.765700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:94949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.555 [2024-12-05 20:28:40.765714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.555 #26 NEW cov: 12376 ft: 14940 corp: 19/415b lim: 40 exec/s: 26 rss: 75Mb L: 37/37 MS: 1 CopyPart- 00:08:47.555 [2024-12-05 20:28:40.805453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.555 [2024-12-05 20:28:40.805478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.555 [2024-12-05 20:28:40.805538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff4243 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.555 [2024-12-05 20:28:40.805552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.555 [2024-12-05 20:28:40.805625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.555 [2024-12-05 20:28:40.805639] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.555 #27 NEW cov: 12376 ft: 14985 corp: 20/442b lim: 40 exec/s: 27 rss: 75Mb L: 27/37 MS: 1 ChangeBit- 00:08:47.556 [2024-12-05 20:28:40.845435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a424242 cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.845460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.556 [2024-12-05 20:28:40.845534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:4242df53 cdw11:644b4302 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.845548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.556 #28 NEW cov: 12376 ft: 15061 corp: 21/460b lim: 40 exec/s: 28 rss: 75Mb L: 18/37 MS: 1 CMP- DE: "\337SdKC\002x\000"- 00:08:47.556 [2024-12-05 20:28:40.885524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.885549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.556 [2024-12-05 20:28:40.885621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:4242010d cdw11:42e24242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.885636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.556 #29 NEW cov: 12376 ft: 15085 corp: 22/476b lim: 40 exec/s: 29 rss: 75Mb L: 16/37 MS: 1 CMP- DE: "\001\015"- 00:08:47.556 [2024-12-05 20:28:40.925748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.925773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.556 [2024-12-05 20:28:40.925851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff4242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.925865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.556 [2024-12-05 20:28:40.925923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:e2454242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.925937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.556 #30 NEW cov: 12376 ft: 15100 corp: 23/503b lim: 40 exec/s: 30 rss: 75Mb L: 27/37 MS: 1 CrossOver- 00:08:47.556 [2024-12-05 20:28:40.986062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2fcacaca cdw11:cacacaca SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.986087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.556 [2024-12-05 20:28:40.986162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacaca7b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.986177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.556 [2024-12-05 20:28:40.986231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.986245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.556 [2024-12-05 20:28:40.986301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1a1a1a1a cdw11:1a1a1a1a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.556 [2024-12-05 20:28:40.986315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:47.815 #31 NEW cov: 12376 ft: 15117 corp: 24/539b lim: 40 exec/s: 31 rss: 75Mb L: 36/37 MS: 1 InsertByte- 00:08:47.815 [2024-12-05 20:28:41.046109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a949494 cdw11:e3d5cdfe SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.815 [2024-12-05 20:28:41.046134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.815 [2024-12-05 20:28:41.046190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:47027800 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.815 [2024-12-05 20:28:41.046207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.815 [2024-12-05 20:28:41.046265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ea949494 cdw11:94949494 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.815 [2024-12-05 20:28:41.046279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:47.815 #32 NEW cov: 12376 ft: 15123 corp: 25/565b lim: 40 exec/s: 32 rss: 75Mb L: 26/37 MS: 1 ChangeByte- 00:08:47.815 [2024-12-05 20:28:41.106037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a424242 cdw11:2b424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.815 [2024-12-05 20:28:41.106063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.815 #33 NEW cov: 12376 ft: 15132 corp: 26/580b lim: 40 exec/s: 33 rss: 75Mb L: 15/37 MS: 1 ChangeByte- 00:08:47.815 [2024-12-05 20:28:41.146089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.816 [2024-12-05 20:28:41.146114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.816 #34 NEW cov: 12376 ft: 15142 corp: 27/591b lim: 40 exec/s: 34 rss: 75Mb L: 11/37 MS: 1 EraseBytes- 00:08:47.816 [2024-12-05 
20:28:41.206422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.816 [2024-12-05 20:28:41.206449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.816 [2024-12-05 20:28:41.206506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:42424241 cdw11:41414242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.816 [2024-12-05 20:28:41.206519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:47.816 #35 NEW cov: 12376 ft: 15161 corp: 28/612b lim: 40 exec/s: 35 rss: 75Mb L: 21/37 MS: 1 CopyPart- 00:08:48.075 [2024-12-05 20:28:41.266737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.075 [2024-12-05 20:28:41.266768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.075 [2024-12-05 20:28:41.266826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff4242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.075 [2024-12-05 20:28:41.266840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.075 [2024-12-05 20:28:41.266896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:e25d4542 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.075 [2024-12-05 20:28:41.266910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.075 #36 NEW cov: 12376 ft: 15191 corp: 29/640b lim: 40 exec/s: 36 rss: 75Mb L: 28/37 MS: 1 InsertByte- 00:08:48.075 [2024-12-05 20:28:41.327049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.075 [2024-12-05 20:28:41.327077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.076 [2024-12-05 20:28:41.327135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff4242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.327153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.076 [2024-12-05 20:28:41.327206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42454242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.327220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.076 [2024-12-05 20:28:41.327274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:42421616 cdw11:16161616 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.327287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.076 #37 NEW cov: 12376 ft: 15242 corp: 30/675b lim: 40 exec/s: 37 rss: 75Mb L: 35/37 MS: 1 InsertRepeatedBytes- 00:08:48.076 [2024-12-05 20:28:41.366714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.366740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.076 #38 NEW cov: 12376 ft: 15256 corp: 31/690b lim: 40 exec/s: 38 rss: 75Mb L: 15/37 MS: 1 InsertByte- 00:08:48.076 [2024-12-05 20:28:41.407127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42ffff cdw11:ffffe1ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.407152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.076 [2024-12-05 20:28:41.407223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff42 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.407237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.076 [2024-12-05 20:28:41.407293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:42424242 cdw11:42424542 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.407307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.076 #39 NEW cov: 12376 ft: 15265 corp: 32/718b lim: 40 exec/s: 39 rss: 75Mb L: 28/37 MS: 1 InsertByte- 00:08:48.076 [2024-12-05 20:28:41.446959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42422b cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.446984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.076 #40 NEW cov: 12376 ft: 15342 corp: 33/732b lim: 40 exec/s: 40 rss: 75Mb L: 14/37 MS: 1 EraseBytes- 00:08:48.076 [2024-12-05 20:28:41.507364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a42ffff cdw11:ffff4242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.507390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.076 [2024-12-05 20:28:41.507451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:42ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.507465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.076 [2024-12-05 20:28:41.507522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ff424242 cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.076 [2024-12-05 20:28:41.507536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.335 #41 NEW cov: 12376 ft: 15363 
corp: 34/762b lim: 40 exec/s: 41 rss: 75Mb L: 30/37 MS: 1 CopyPart- 00:08:48.335 [2024-12-05 20:28:41.547592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a424242 cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.335 [2024-12-05 20:28:41.547618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.335 [2024-12-05 20:28:41.547675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.335 [2024-12-05 20:28:41.547689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.335 [2024-12-05 20:28:41.547750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:42424242 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.335 [2024-12-05 20:28:41.547764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:48.335 [2024-12-05 20:28:41.547816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:42424245 cdw11:424242df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.335 [2024-12-05 20:28:41.547829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:48.335 #42 NEW cov: 12376 ft: 15423 corp: 35/801b lim: 40 exec/s: 21 rss: 75Mb L: 39/39 MS: 1 CrossOver- 00:08:48.335 #42 DONE cov: 12376 ft: 15423 corp: 35/801b lim: 40 exec/s: 21 rss: 75Mb 00:08:48.335 ###### Recommended dictionary. ###### 00:08:48.335 "\343\325\315\376G\002x\000" # Uses: 0 00:08:48.335 "\337SdKC\002x\000" # Uses: 0 00:08:48.335 "\001\015" # Uses: 0 00:08:48.335 ###### End of recommended dictionary. 
###### 00:08:48.335 Done 42 runs in 2 second(s) 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:48.335 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:08:48.336 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:08:48.336 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:48.336 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:08:48.336 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:48.336 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:48.336 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:48.336 20:28:41 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:08:48.336 [2024-12-05 20:28:41.748331] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:48.336 [2024-12-05 20:28:41.748416] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844753 ] 00:08:48.604 [2024-12-05 20:28:41.943211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.604 [2024-12-05 20:28:41.980133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.865 [2024-12-05 20:28:42.039361] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.865 [2024-12-05 20:28:42.055592] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:48.865 INFO: Running with entropic power schedule (0xFF, 100). 00:08:48.865 INFO: Seed: 4265583078 00:08:48.865 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:48.865 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:48.865 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:48.865 INFO: A corpus is not provided, starting from an empty corpus 00:08:48.865 #2 INITED exec/s: 0 rss: 67Mb 00:08:48.865 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:48.865 This may also happen if the target rejected all inputs we tried so far 00:08:48.865 [2024-12-05 20:28:42.111047] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:48.865 [2024-12-05 20:28:42.111080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.122 NEW_FUNC[1/717]: 0x44fd28 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:49.122 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:49.122 #3 NEW cov: 12143 ft: 12139 corp: 2/10b lim: 35 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 CMP- DE: "\377\377\377\377\377\377\377G"- 00:08:49.122 [2024-12-05 20:28:42.452079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.122 [2024-12-05 20:28:42.452120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.122 [2024-12-05 20:28:42.452176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.122 [2024-12-05 20:28:42.452192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.122 #4 NEW cov: 12256 ft: 13360 corp: 3/26b lim: 35 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 CopyPart- 00:08:49.122 [2024-12-05 20:28:42.511962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.122 [2024-12-05 20:28:42.511990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.122 #5 NEW cov: 12262 ft: 13708 corp: 4/35b lim: 35 exec/s: 0 rss: 74Mb L: 9/16 MS: 1 
PersAutoDict- DE: "\377\377\377\377\377\377\377G"- 00:08:49.122 [2024-12-05 20:28:42.552029] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.122 [2024-12-05 20:28:42.552055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.380 #6 NEW cov: 12347 ft: 13952 corp: 5/44b lim: 35 exec/s: 0 rss: 74Mb L: 9/16 MS: 1 CopyPart- 00:08:49.380 NEW_FUNC[1/2]: 0x471278 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:49.380 NEW_FUNC[2/2]: 0x1391168 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1768 00:08:49.380 #7 NEW cov: 12380 ft: 14083 corp: 6/53b lim: 35 exec/s: 0 rss: 74Mb L: 9/16 MS: 1 InsertRepeatedBytes- 00:08:49.380 [2024-12-05 20:28:42.642427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.642455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.380 [2024-12-05 20:28:42.642512] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.642529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.380 #8 NEW cov: 12380 ft: 14134 corp: 7/69b lim: 35 exec/s: 0 rss: 74Mb L: 16/16 MS: 1 ShuffleBytes- 00:08:49.380 [2024-12-05 20:28:42.702413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.702441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.380 #9 NEW cov: 12380 ft: 14219 corp: 8/78b lim: 35 exec/s: 0 rss: 74Mb L: 9/16 MS: 1 ChangeBinInt- 00:08:49.380 [2024-12-05 20:28:42.743001] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.743030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.380 [2024-12-05 20:28:42.743088] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.743105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.380 [2024-12-05 20:28:42.743160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.743174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.380 [2024-12-05 20:28:42.743230] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.743245] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.380 #10 NEW cov: 12380 ft: 14590 corp: 9/109b lim: 35 exec/s: 0 rss: 74Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:08:49.380 [2024-12-05 20:28:42.782963] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.782991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.380 [2024-12-05 20:28:42.783049] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.783066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.380 [2024-12-05 20:28:42.783126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.380 [2024-12-05 20:28:42.783143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.380 #11 NEW cov: 12380 ft: 14816 corp: 10/136b lim: 35 exec/s: 0 rss: 74Mb L: 27/31 MS: 1 InsertRepeatedBytes- 00:08:49.639 [2024-12-05 20:28:42.822764] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.639 [2024-12-05 20:28:42.822796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.639 #12 NEW cov: 12380 ft: 14866 corp: 11/145b lim: 35 exec/s: 0 rss: 75Mb L: 9/31 MS: 1 ChangeBit- 00:08:49.639 #13 NEW cov: 12380 ft: 14949 corp: 12/154b lim: 35 exec/s: 0 rss: 75Mb L: 9/31 MS: 1 ChangeByte- 00:08:49.639 [2024-12-05 20:28:42.943244] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.639 [2024-12-05 20:28:42.943273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.639 [2024-12-05 20:28:42.943334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.639 [2024-12-05 20:28:42.943348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.639 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:49.639 #19 NEW cov: 12410 ft: 14971 corp: 13/171b lim: 35 exec/s: 0 rss: 75Mb L: 17/31 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377G"- 00:08:49.639 [2024-12-05 20:28:43.003372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.639 [2024-12-05 20:28:43.003403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.639 #20 NEW cov: 12410 ft: 15042 corp: 14/183b lim: 35 exec/s: 0 rss: 75Mb L: 12/31 MS: 1 EraseBytes- 00:08:49.639 [2024-12-05 20:28:43.063551] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.639 [2024-12-05 20:28:43.063579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.639 [2024-12-05 20:28:43.063636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.639 [2024-12-05 20:28:43.063650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.897 #21 NEW cov: 12410 ft: 15063 corp: 15/200b lim: 35 exec/s: 21 rss: 75Mb L: 17/31 MS: 1 ChangeBinInt- 00:08:49.897 [2024-12-05 20:28:43.123597] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000004d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.897 [2024-12-05 20:28:43.123622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.898 #22 NEW cov: 12410 ft: 15109 corp: 16/209b lim: 35 exec/s: 22 rss: 75Mb L: 9/31 MS: 1 CMP- DE: "M>7\203D\002x\000"- 00:08:49.898 [2024-12-05 20:28:43.183742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.898 [2024-12-05 20:28:43.183772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.898 NEW_FUNC[1/1]: 0x46ed28 in feat_number_of_queues /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:318 00:08:49.898 #23 NEW cov: 12442 ft: 15194 corp: 17/221b lim: 35 exec/s: 23 rss: 75Mb L: 12/31 MS: 1 ChangeBinInt- 00:08:49.898 [2024-12-05 20:28:43.243936] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000041 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.898 [2024-12-05 20:28:43.243963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.898 #24 NEW cov: 12442 ft: 15257 corp: 18/230b lim: 35 exec/s: 24 rss: 75Mb L: 9/31 MS: 1 ShuffleBytes- 00:08:49.898 [2024-12-05 20:28:43.284383] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.898 [2024-12-05 20:28:43.284416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.898 [2024-12-05 20:28:43.284473] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.898 [2024-12-05 20:28:43.284489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.898 [2024-12-05 20:28:43.284547] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:49.898 [2024-12-05 20:28:43.284563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.898 #25 NEW cov: 12442 ft: 15270 corp: 19/255b lim: 35 exec/s: 25 rss: 75Mb L: 25/31 MS: 1 EraseBytes- 00:08:50.156 [2024-12-05 20:28:43.344636] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES 
WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.156 [2024-12-05 20:28:43.344664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.156 [2024-12-05 20:28:43.344720] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.156 [2024-12-05 20:28:43.344735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.156 [2024-12-05 20:28:43.344794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.156 [2024-12-05 20:28:43.344810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.156 [2024-12-05 20:28:43.344867] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.156 [2024-12-05 20:28:43.344883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.157 #26 NEW cov: 12442 ft: 15317 corp: 20/286b lim: 35 exec/s: 26 rss: 75Mb L: 31/31 MS: 1 ChangeBinInt- 00:08:50.157 [2024-12-05 20:28:43.404350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.404375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.157 #27 NEW cov: 12442 ft: 15345 corp: 21/295b lim: 35 exec/s: 27 rss: 75Mb L: 9/31 MS: 1 ChangeBit- 00:08:50.157 #28 NEW cov: 12442 ft: 15397 corp: 22/304b lim: 35 exec/s: 28 rss: 75Mb L: 9/31 MS: 1 ChangeBinInt- 00:08:50.157 [2024-12-05 20:28:43.484578] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.484603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.157 #30 NEW cov: 12442 ft: 15407 corp: 23/316b lim: 35 exec/s: 30 rss: 75Mb L: 12/31 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:50.157 [2024-12-05 20:28:43.525132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.525159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.157 [2024-12-05 20:28:43.525217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.525233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.157 [2024-12-05 20:28:43.525290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.525309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:08:50.157 [2024-12-05 20:28:43.525363] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.525378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.157 #31 NEW cov: 12442 ft: 15414 corp: 24/349b lim: 35 exec/s: 31 rss: 75Mb L: 33/33 MS: 1 CopyPart- 00:08:50.157 [2024-12-05 20:28:43.585337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.585362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.157 [2024-12-05 20:28:43.585417] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.585433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.157 [2024-12-05 20:28:43.585507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.585524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.157 [2024-12-05 20:28:43.585579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.157 [2024-12-05 20:28:43.585593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.416 #32 NEW cov: 12442 ft: 15450 corp: 25/378b lim: 35 exec/s: 32 rss: 75Mb L: 29/33 MS: 1 InsertRepeatedBytes- 00:08:50.416 [2024-12-05 20:28:43.645016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.416 [2024-12-05 20:28:43.645044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.416 #33 NEW cov: 12442 ft: 15451 corp: 26/387b lim: 35 exec/s: 33 rss: 76Mb L: 9/33 MS: 1 CrossOver- 00:08:50.416 [2024-12-05 20:28:43.705319] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.416 [2024-12-05 20:28:43.705346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.416 [2024-12-05 20:28:43.705402] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.416 [2024-12-05 20:28:43.705416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.416 #34 NEW cov: 12442 ft: 15452 corp: 27/404b lim: 35 exec/s: 34 rss: 76Mb L: 17/33 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377G"- 00:08:50.416 [2024-12-05 20:28:43.765488] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000041 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.416 [2024-12-05 20:28:43.765513] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.416 [2024-12-05 20:28:43.765568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000041 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.416 [2024-12-05 20:28:43.765581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.416 #35 NEW cov: 12442 ft: 15490 corp: 28/421b lim: 35 exec/s: 35 rss: 76Mb L: 17/33 MS: 1 PersAutoDict- DE: "M>7\203D\002x\000"- 00:08:50.416 [2024-12-05 20:28:43.805438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.416 [2024-12-05 20:28:43.805466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.416 #36 NEW cov: 12442 ft: 15503 corp: 29/434b lim: 35 exec/s: 36 rss: 76Mb L: 13/33 MS: 1 InsertByte- 00:08:50.675 [2024-12-05 20:28:43.865931] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.675 [2024-12-05 20:28:43.865958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.675 [2024-12-05 20:28:43.866017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.675 [2024-12-05 20:28:43.866033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.675 [2024-12-05 20:28:43.866104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.675 [2024-12-05 20:28:43.866121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.675 #37 NEW cov: 12442 ft: 15522 corp: 30/460b lim: 35 exec/s: 37 rss: 76Mb L: 26/33 MS: 1 EraseBytes- 00:08:50.675 [2024-12-05 20:28:43.926076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.675 [2024-12-05 20:28:43.926102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.675 [2024-12-05 20:28:43.926156] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.675 [2024-12-05 20:28:43.926170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.675 [2024-12-05 20:28:43.926228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.675 [2024-12-05 20:28:43.926244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.675 #38 NEW cov: 12442 ft: 15563 corp: 31/486b lim: 35 exec/s: 38 rss: 76Mb L: 26/33 MS: 1 ChangeBit- 00:08:50.675 [2024-12-05 20:28:43.985953] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 
cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.675 [2024-12-05 20:28:43.985978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.675 #44 NEW cov: 12442 ft: 15622 corp: 32/498b lim: 35 exec/s: 44 rss: 76Mb L: 12/33 MS: 1 ShuffleBytes- 00:08:50.675 [2024-12-05 20:28:44.046563] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.676 [2024-12-05 20:28:44.046588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.676 [2024-12-05 20:28:44.046645] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.676 [2024-12-05 20:28:44.046660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.676 [2024-12-05 20:28:44.046716] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000036 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.676 [2024-12-05 20:28:44.046730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.676 [2024-12-05 20:28:44.046787] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000c9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.676 [2024-12-05 20:28:44.046806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:50.676 #45 NEW cov: 12442 ft: 15629 corp: 33/527b lim: 35 exec/s: 45 rss: 77Mb L: 29/33 MS: 1 ChangeBinInt- 00:08:50.676 [2024-12-05 20:28:44.106390] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.676 [2024-12-05 20:28:44.106417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.676 [2024-12-05 20:28:44.106493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000047 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:50.676 [2024-12-05 20:28:44.106509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.954 #46 NEW cov: 12442 ft: 15652 corp: 34/547b lim: 35 exec/s: 23 rss: 77Mb L: 20/33 MS: 1 CrossOver- 00:08:50.954 #46 DONE cov: 12442 ft: 15652 corp: 34/547b lim: 35 exec/s: 23 rss: 77Mb 00:08:50.954 ###### Recommended dictionary. ###### 00:08:50.954 "\377\377\377\377\377\377\377G" # Uses: 3 00:08:50.955 "M>7\203D\002x\000" # Uses: 1 00:08:50.955 ###### End of recommended dictionary. 
###### 00:08:50.955 Done 46 runs in 2 second(s) 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:50.955 20:28:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:08:50.955 [2024-12-05 20:28:44.314327] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:08:50.955 [2024-12-05 20:28:44.314405] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845051 ] 00:08:51.214 [2024-12-05 20:28:44.527715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.214 [2024-12-05 20:28:44.565054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.214 [2024-12-05 20:28:44.624351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.214 [2024-12-05 20:28:44.640590] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:51.472 INFO: Running with entropic power schedule (0xFF, 100). 00:08:51.472 INFO: Seed: 2555603673 00:08:51.472 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:51.472 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:51.472 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:51.472 INFO: A corpus is not provided, starting from an empty corpus 00:08:51.472 #2 INITED exec/s: 0 rss: 67Mb 00:08:51.472 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:51.472 This may also happen if the target rejected all inputs we tried so far 00:08:51.472 [2024-12-05 20:28:44.696265] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.472 [2024-12-05 20:28:44.696295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.472 [2024-12-05 20:28:44.696374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.472 [2024-12-05 20:28:44.696388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.472 [2024-12-05 20:28:44.696448] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.472 [2024-12-05 20:28:44.696463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.731 NEW_FUNC[1/716]: 0x451268 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:51.731 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:51.731 #7 NEW cov: 12124 ft: 12122 corp: 2/25b lim: 35 exec/s: 0 rss: 74Mb L: 24/24 MS: 5 CopyPart-ShuffleBytes-ChangeByte-CopyPart-InsertRepeatedBytes- 00:08:51.731 [2024-12-05 20:28:45.017185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 20:28:45.017228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.731 [2024-12-05 20:28:45.017308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 
20:28:45.017326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.731 [2024-12-05 20:28:45.017394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 20:28:45.017411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.731 #8 NEW cov: 12244 ft: 12812 corp: 3/49b lim: 35 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 ChangeBit- 00:08:51.731 [2024-12-05 20:28:45.077196] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 20:28:45.077224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.731 [2024-12-05 20:28:45.077301] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 20:28:45.077316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.731 [2024-12-05 20:28:45.077380] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 20:28:45.077394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.731 #9 NEW cov: 12250 ft: 13009 corp: 4/71b lim: 35 exec/s: 0 rss: 74Mb L: 22/24 MS: 1 CrossOver- 00:08:51.731 [2024-12-05 20:28:45.117416] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 20:28:45.117443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.731 [2024-12-05 20:28:45.117502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 20:28:45.117517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.731 [2024-12-05 20:28:45.117579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 20:28:45.117593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.731 [2024-12-05 20:28:45.117653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.731 [2024-12-05 20:28:45.117667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.731 #10 NEW cov: 12335 ft: 13712 corp: 5/101b lim: 35 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 CopyPart- 00:08:51.989 [2024-12-05 20:28:45.177456] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.177480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.177562] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.177577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.177639] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.177653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.989 #11 NEW cov: 12335 ft: 13835 corp: 6/125b lim: 35 exec/s: 0 rss: 74Mb L: 24/30 MS: 1 ChangeBit- 00:08:51.989 [2024-12-05 20:28:45.217567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.217593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.217673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.217689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.217755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.217770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.989 #12 NEW cov: 12335 ft: 13895 corp: 7/150b lim: 35 exec/s: 0 rss: 74Mb L: 25/30 MS: 1 InsertByte- 00:08:51.989 [2024-12-05 20:28:45.277838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.277866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.277944] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.277959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.278020] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.278034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.278094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.278108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:51.989 #13 NEW cov: 12335 ft: 13977 corp: 8/179b lim: 35 exec/s: 0 rss: 74Mb L: 29/30 MS: 1 CMP- DE: "\015\000\000\000"- 00:08:51.989 [2024-12-05 20:28:45.337949] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.337976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.338112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.338128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.989 NEW_FUNC[1/1]: 0x46a708 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:08:51.989 #14 NEW cov: 12373 ft: 14179 corp: 9/203b lim: 35 exec/s: 0 rss: 74Mb L: 24/30 MS: 1 CMP- DE: "\001\000\000\006"- 00:08:51.989 [2024-12-05 20:28:45.378049] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.378076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.378140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.378155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.989 [2024-12-05 20:28:45.378217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.989 [2024-12-05 20:28:45.378232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.989 #15 NEW cov: 12373 ft: 14251 corp: 10/227b lim: 35 exec/s: 0 rss: 75Mb L: 24/30 MS: 1 ShuffleBytes- 00:08:52.247 [2024-12-05 20:28:45.438237] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.438268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.247 [2024-12-05 20:28:45.438329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000721 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.438343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.247 [2024-12-05 20:28:45.438404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.438419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.247 #16 NEW cov: 12373 ft: 14350 corp: 11/251b lim: 35 exec/s: 0 rss: 75Mb L: 24/30 MS: 1 ChangeByte- 00:08:52.247 [2024-12-05 20:28:45.498325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.498353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.247 [2024-12-05 20:28:45.498475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.498490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.247 #17 NEW cov: 12373 ft: 14426 corp: 12/275b lim: 35 exec/s: 0 rss: 75Mb L: 24/30 MS: 1 ChangeBinInt- 00:08:52.247 [2024-12-05 20:28:45.558550] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.558577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.247 [2024-12-05 20:28:45.558640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.558654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.247 NEW_FUNC[1/2]: 0x471278 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:52.247 NEW_FUNC[2/2]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:52.247 #18 NEW cov: 12410 ft: 14491 corp: 13/299b lim: 35 exec/s: 0 rss: 75Mb L: 24/30 MS: 1 CrossOver- 00:08:52.247 [2024-12-05 20:28:45.598734] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.598773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.247 [2024-12-05 20:28:45.598854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.598869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.247 [2024-12-05 20:28:45.598934] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.598948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.247 [2024-12-05 20:28:45.599008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.599022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.247 #19 NEW cov: 12410 ft: 14495 corp: 14/329b lim: 35 exec/s: 0 rss: 75Mb L: 30/30 MS: 1 ChangeBit- 00:08:52.247 [2024-12-05 20:28:45.658781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.658807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.247 [2024-12-05 20:28:45.658865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.658880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.247 [2024-12-05 20:28:45.658941] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.247 [2024-12-05 20:28:45.658958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.247 #25 NEW cov: 12410 ft: 14500 corp: 15/354b lim: 35 exec/s: 25 rss: 75Mb L: 25/30 MS: 1 InsertByte- 00:08:52.506 [2024-12-05 20:28:45.699006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.699031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.699105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.699120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.699179] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.699193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.699251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.699265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.506 #27 NEW cov: 12410 ft: 14512 corp: 16/382b lim: 35 exec/s: 27 rss: 75Mb L: 28/30 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:52.506 [2024-12-05 20:28:45.739046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.739073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.739137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.739151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.739213] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.739227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.506 #28 NEW cov: 12410 ft: 14561 corp: 17/404b lim: 35 exec/s: 28 rss: 75Mb L: 22/30 MS: 1 ShuffleBytes- 00:08:52.506 [2024-12-05 20:28:45.779267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:52.506 [2024-12-05 20:28:45.779292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.779353] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.779367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.779429] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.779442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.779502] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.779515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.506 #29 NEW cov: 12410 ft: 14592 corp: 18/432b lim: 35 exec/s: 29 rss: 75Mb L: 28/30 MS: 1 ChangeBit- 00:08:52.506 [2024-12-05 20:28:45.839404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.839430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.839494] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.839507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.839587] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.839601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.839664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.839678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.506 #30 NEW cov: 12410 ft: 14615 corp: 19/460b lim: 35 exec/s: 30 rss: 75Mb L: 28/30 MS: 1 PersAutoDict- DE: "\001\000\000\006"- 00:08:52.506 [2024-12-05 20:28:45.899547] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.899572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.899633] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.899646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.506 
[2024-12-05 20:28:45.899709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.899723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.899791] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.899805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.506 #31 NEW cov: 12410 ft: 14643 corp: 20/489b lim: 35 exec/s: 31 rss: 75Mb L: 29/30 MS: 1 InsertByte- 00:08:52.506 [2024-12-05 20:28:45.939555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.939581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.939644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.939658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.506 [2024-12-05 20:28:45.939723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.506 [2024-12-05 20:28:45.939738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.765 #32 NEW cov: 12410 ft: 14652 corp: 21/512b lim: 35 exec/s: 32 rss: 75Mb L: 23/30 MS: 1 InsertByte- 00:08:52.765 [2024-12-05 20:28:45.979615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:45.979640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:45.979710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:45.979724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:45.979792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:45.979806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.765 #33 NEW cov: 12410 ft: 14677 corp: 22/537b lim: 35 exec/s: 33 rss: 75Mb L: 25/30 MS: 1 CopyPart- 00:08:52.765 [2024-12-05 20:28:46.039942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.039968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.040028] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.040042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.040100] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.040114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.040173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.040187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.765 #34 NEW cov: 12410 ft: 14685 corp: 23/565b lim: 35 exec/s: 34 rss: 75Mb L: 28/30 MS: 1 ChangeBinInt- 00:08:52.765 [2024-12-05 20:28:46.080041] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.080066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.080130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.080144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.080208] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007c3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.080221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.080281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.080294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.765 #35 NEW cov: 12410 ft: 14734 corp: 24/598b lim: 35 exec/s: 35 rss: 75Mb L: 33/33 MS: 1 CMP- DE: "\015\303\372\\F\002x\000"- 00:08:52.765 [2024-12-05 20:28:46.120166] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.120191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.120250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.120267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.120345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.120360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.120422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.120436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.765 #36 NEW cov: 12410 ft: 14746 corp: 25/631b lim: 35 exec/s: 36 rss: 75Mb L: 33/33 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\006"- 00:08:52.765 [2024-12-05 20:28:46.180320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.180345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.180408] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.180422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.180483] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.180497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.765 [2024-12-05 20:28:46.180558] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.765 [2024-12-05 20:28:46.180572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.024 #37 NEW cov: 12410 ft: 14845 corp: 26/664b lim: 35 exec/s: 37 rss: 75Mb L: 33/33 MS: 1 ChangeBit- 00:08:53.024 [2024-12-05 20:28:46.240356] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.240380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.240445] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007f7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.240458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.240521] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.240535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.024 #38 NEW cov: 12410 ft: 14859 corp: 27/689b lim: 35 exec/s: 38 rss: 75Mb L: 25/33 MS: 1 ChangeBit- 00:08:53.024 [2024-12-05 20:28:46.280596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.280620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:53.024 [2024-12-05 20:28:46.280686] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.280700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.280767] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.280784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.280846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.280860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.024 #39 NEW cov: 12410 ft: 14879 corp: 28/719b lim: 35 exec/s: 39 rss: 75Mb L: 30/33 MS: 1 ChangeByte- 00:08:53.024 [2024-12-05 20:28:46.320747] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.320772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.320836] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.320850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.320916] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007c3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.320930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.320994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.321008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.024 #40 NEW cov: 12410 ft: 14886 corp: 29/753b lim: 35 exec/s: 40 rss: 76Mb L: 34/34 MS: 1 InsertByte- 00:08:53.024 [2024-12-05 20:28:46.380749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.380791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.380855] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.380869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.380945] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 
[2024-12-05 20:28:46.380960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.024 #41 NEW cov: 12410 ft: 14887 corp: 30/775b lim: 35 exec/s: 41 rss: 76Mb L: 22/34 MS: 1 ChangeBinInt- 00:08:53.024 [2024-12-05 20:28:46.420754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.420779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.024 [2024-12-05 20:28:46.420843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.024 [2024-12-05 20:28:46.420857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.283 #42 NEW cov: 12410 ft: 15026 corp: 31/792b lim: 35 exec/s: 42 rss: 76Mb L: 17/34 MS: 1 EraseBytes- 00:08:53.283 [2024-12-05 20:28:46.481201] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.283 [2024-12-05 20:28:46.481231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.481310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.481325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.481388] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.481402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.481462] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.481477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.284 #43 NEW cov: 12410 ft: 15029 corp: 32/822b lim: 35 exec/s: 43 rss: 76Mb L: 30/34 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\006"- 00:08:53.284 [2024-12-05 20:28:46.521199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000001e9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.521225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.521305] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.521320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.521382] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.521396] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.284 #44 NEW cov: 12410 ft: 15031 corp: 33/847b lim: 35 exec/s: 44 rss: 76Mb L: 25/34 MS: 1 InsertByte- 00:08:53.284 [2024-12-05 20:28:46.561247] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.561273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.561336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.561351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.561418] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.561432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.284 #45 NEW cov: 12410 ft: 15041 corp: 34/869b lim: 35 exec/s: 45 rss: 76Mb L: 22/34 MS: 1 EraseBytes- 00:08:53.284 [2024-12-05 20:28:46.601347] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000002d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.601373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.601436] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.601450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.601512] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007df SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.601529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.284 #46 NEW cov: 12410 ft: 15059 corp: 35/890b lim: 35 exec/s: 46 rss: 76Mb L: 21/34 MS: 1 EraseBytes- 00:08:53.284 [2024-12-05 20:28:46.641625] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.641651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.641714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.641729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.641813] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.641827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:08:53.284 [2024-12-05 20:28:46.641890] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.284 [2024-12-05 20:28:46.641903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.284 #47 NEW cov: 12410 ft: 15092 corp: 36/922b lim: 35 exec/s: 23 rss: 76Mb L: 32/34 MS: 1 CopyPart- 00:08:53.284 #47 DONE cov: 12410 ft: 15092 corp: 36/922b lim: 35 exec/s: 23 rss: 76Mb 00:08:53.284 ###### Recommended dictionary. ###### 00:08:53.284 "\015\000\000\000" # Uses: 0 00:08:53.284 "\001\000\000\006" # Uses: 1 00:08:53.284 "\015\303\372\\F\002x\000" # Uses: 0 00:08:53.284 "\001\000\000\000\000\000\000\006" # Uses: 1 00:08:53.284 ###### End of recommended dictionary. ###### 00:08:53.284 Done 47 runs in 2 second(s) 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:53.543 20:28:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:08:53.543 [2024-12-05 20:28:46.833890] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 
24.03.0 initialization... 00:08:53.543 [2024-12-05 20:28:46.833966] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845332 ] 00:08:53.803 [2024-12-05 20:28:47.048624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.803 [2024-12-05 20:28:47.087696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.803 [2024-12-05 20:28:47.147296] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.803 [2024-12-05 20:28:47.163529] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:08:53.803 INFO: Running with entropic power schedule (0xFF, 100). 00:08:53.803 INFO: Seed: 781664087 00:08:53.803 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:53.803 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:53.803 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:53.803 INFO: A corpus is not provided, starting from an empty corpus 00:08:53.803 #2 INITED exec/s: 0 rss: 67Mb 00:08:53.803 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:53.803 This may also happen if the target rejected all inputs we tried so far 00:08:53.803 [2024-12-05 20:28:47.234095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.803 [2024-12-05 20:28:47.234148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:53.803 [2024-12-05 20:28:47.234249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.803 [2024-12-05 20:28:47.234273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.326 NEW_FUNC[1/717]: 0x452728 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:08:54.326 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:54.326 #4 NEW cov: 12235 ft: 12236 corp: 2/49b lim: 105 exec/s: 0 rss: 74Mb L: 48/48 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:54.326 [2024-12-05 20:28:47.574630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.326 [2024-12-05 20:28:47.574673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.326 [2024-12-05 20:28:47.574751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.326 [2024-12-05 20:28:47.574771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.326 #5 NEW cov: 12348 ft: 12837 corp: 3/97b lim: 105 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 CopyPart- 
00:08:54.327 [2024-12-05 20:28:47.644722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.327 [2024-12-05 20:28:47.644756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.327 [2024-12-05 20:28:47.644834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.327 [2024-12-05 20:28:47.644857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.327 #6 NEW cov: 12354 ft: 13110 corp: 4/145b lim: 105 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 ChangeBinInt- 00:08:54.327 [2024-12-05 20:28:47.694874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.327 [2024-12-05 20:28:47.694906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.327 [2024-12-05 20:28:47.694997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.327 [2024-12-05 20:28:47.695014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.327 #12 NEW cov: 12439 ft: 13363 corp: 5/193b lim: 105 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 ShuffleBytes- 00:08:54.584 [2024-12-05 20:28:47.765247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.584 [2024-12-05 20:28:47.765277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.584 [2024-12-05 20:28:47.765362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.584 [2024-12-05 20:28:47.765382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.584 #13 NEW cov: 12439 ft: 13462 corp: 6/241b lim: 105 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 ShuffleBytes- 00:08:54.584 [2024-12-05 20:28:47.835420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.584 [2024-12-05 20:28:47.835450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.584 [2024-12-05 20:28:47.835520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.584 [2024-12-05 20:28:47.835541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.584 #14 NEW cov: 12439 ft: 13510 corp: 7/289b lim: 105 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 CopyPart- 00:08:54.584 [2024-12-05 20:28:47.886000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.584 [2024-12-05 20:28:47.886031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.584 [2024-12-05 20:28:47.886116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.585 [2024-12-05 20:28:47.886133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.585 [2024-12-05 20:28:47.886189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.585 [2024-12-05 20:28:47.886206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.585 #15 NEW cov: 12439 ft: 13862 corp: 8/357b lim: 105 exec/s: 0 rss: 74Mb L: 68/68 MS: 1 InsertRepeatedBytes- 00:08:54.585 [2024-12-05 20:28:47.935905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.585 [2024-12-05 20:28:47.935935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.585 [2024-12-05 20:28:47.936013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.585 [2024-12-05 20:28:47.936032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.585 #16 NEW cov: 12439 ft: 13988 corp: 9/405b lim: 105 exec/s: 0 rss: 74Mb L: 48/68 MS: 1 ChangeByte- 00:08:54.585 [2024-12-05 20:28:47.985798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.585 [2024-12-05 20:28:47.985828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.842 #17 NEW cov: 12439 ft: 14401 corp: 10/439b lim: 105 exec/s: 0 rss: 74Mb L: 34/68 MS: 1 EraseBytes- 00:08:54.842 [2024-12-05 20:28:48.056279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.842 [2024-12-05 20:28:48.056309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.842 [2024-12-05 20:28:48.056383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.842 [2024-12-05 20:28:48.056402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.842 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:54.842 #18 NEW cov: 12462 ft: 14438 corp: 11/487b lim: 105 exec/s: 0 rss: 75Mb L: 48/68 MS: 1 ChangeBinInt- 00:08:54.842 [2024-12-05 20:28:48.126686] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10127624197330734220 len:35981 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.842 [2024-12-05 20:28:48.126714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.842 [2024-12-05 20:28:48.126799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:10127624197330734220 len:35981 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.842 [2024-12-05 20:28:48.126818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.843 [2024-12-05 20:28:48.126910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:10127624197330734220 len:35981 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.843 [2024-12-05 20:28:48.126927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:54.843 #19 NEW cov: 12462 ft: 14462 corp: 12/565b lim: 105 exec/s: 0 rss: 75Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:08:54.843 [2024-12-05 20:28:48.176717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709496063 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.843 [2024-12-05 20:28:48.176749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.843 [2024-12-05 20:28:48.176828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.843 [2024-12-05 20:28:48.176846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.843 #20 NEW cov: 12462 ft: 14508 corp: 13/613b lim: 105 exec/s: 0 rss: 75Mb L: 48/78 MS: 1 ChangeByte- 00:08:54.843 [2024-12-05 20:28:48.226978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.843 [2024-12-05 20:28:48.227006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.843 [2024-12-05 20:28:48.227080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.843 [2024-12-05 20:28:48.227102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:54.843 #21 NEW cov: 12462 ft: 14530 corp: 14/661b lim: 105 exec/s: 21 rss: 75Mb L: 48/78 MS: 1 ChangeBit- 00:08:54.843 [2024-12-05 20:28:48.277318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.843 [2024-12-05 20:28:48.277350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:54.843 [2024-12-05 20:28:48.277449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.843 [2024-12-05 20:28:48.277469] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.101 #22 NEW cov: 12462 ft: 14596 corp: 15/709b lim: 105 exec/s: 22 rss: 75Mb L: 48/78 MS: 1 ShuffleBytes- 00:08:55.101 [2024-12-05 20:28:48.348007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.348042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.101 [2024-12-05 20:28:48.348108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.348128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.101 [2024-12-05 20:28:48.348189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.348207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.101 [2024-12-05 20:28:48.348295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.348315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:55.101 #23 NEW cov: 12462 ft: 15117 corp: 16/800b lim: 105 exec/s: 23 rss: 75Mb L: 91/91 MS: 1 CopyPart- 00:08:55.101 [2024-12-05 20:28:48.427885] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:12800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.427917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.101 [2024-12-05 20:28:48.428015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.428032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.101 #24 NEW cov: 12462 ft: 15131 corp: 17/848b lim: 105 exec/s: 24 rss: 75Mb L: 48/91 MS: 1 ChangeByte- 00:08:55.101 [2024-12-05 20:28:48.478742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2358003712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.478779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.101 [2024-12-05 20:28:48.478860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.478882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.101 [2024-12-05 20:28:48.478937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.478955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.101 [2024-12-05 20:28:48.479040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.101 [2024-12-05 20:28:48.479058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:55.101 #29 NEW cov: 12462 ft: 15201 corp: 18/941b lim: 105 exec/s: 29 rss: 75Mb L: 93/93 MS: 5 CrossOver-CrossOver-ChangeByte-CopyPart-InsertRepeatedBytes- 00:08:55.358 [2024-12-05 20:28:48.548483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.358 [2024-12-05 20:28:48.548516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.358 #30 NEW cov: 12462 ft: 15213 corp: 19/976b lim: 105 exec/s: 30 rss: 75Mb L: 35/93 MS: 1 InsertByte- 00:08:55.358 [2024-12-05 20:28:48.618808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.358 [2024-12-05 20:28:48.618837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.358 [2024-12-05 20:28:48.618910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073695526911 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.358 [2024-12-05 20:28:48.618927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.358 #31 NEW cov: 12462 ft: 15223 corp: 20/1025b lim: 105 exec/s: 31 rss: 75Mb L: 49/93 MS: 1 InsertByte- 00:08:55.358 [2024-12-05 20:28:48.688970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.358 [2024-12-05 20:28:48.688998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.358 [2024-12-05 20:28:48.689057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.358 [2024-12-05 20:28:48.689074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.358 #32 NEW cov: 12462 ft: 15256 corp: 21/1073b lim: 105 exec/s: 32 rss: 75Mb L: 48/93 MS: 1 CopyPart- 00:08:55.359 [2024-12-05 20:28:48.739141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.359 [2024-12-05 20:28:48.739170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.359 [2024-12-05 20:28:48.739247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.359 [2024-12-05 
20:28:48.739264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.359 #33 NEW cov: 12462 ft: 15297 corp: 22/1121b lim: 105 exec/s: 33 rss: 75Mb L: 48/93 MS: 1 CopyPart- 00:08:55.359 [2024-12-05 20:28:48.789100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.359 [2024-12-05 20:28:48.789130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.616 #34 NEW cov: 12462 ft: 15311 corp: 23/1155b lim: 105 exec/s: 34 rss: 75Mb L: 34/93 MS: 1 ChangeByte- 00:08:55.616 [2024-12-05 20:28:48.839463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-12-05 20:28:48.839490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.616 [2024-12-05 20:28:48.839561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-12-05 20:28:48.839579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.616 #35 NEW cov: 12462 ft: 15329 corp: 24/1204b lim: 105 exec/s: 35 rss: 75Mb L: 49/93 MS: 1 InsertRepeatedBytes- 00:08:55.616 [2024-12-05 20:28:48.889719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-12-05 20:28:48.889750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.616 [2024-12-05 20:28:48.889814] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-12-05 20:28:48.889830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.616 #36 NEW cov: 12462 ft: 15357 corp: 25/1257b lim: 105 exec/s: 36 rss: 75Mb L: 53/93 MS: 1 EraseBytes- 00:08:55.616 [2024-12-05 20:28:48.960052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-12-05 20:28:48.960080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.616 [2024-12-05 20:28:48.960148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073695526911 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-12-05 20:28:48.960168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.616 #37 NEW cov: 12462 ft: 15377 corp: 26/1307b lim: 105 exec/s: 37 rss: 75Mb L: 50/93 MS: 1 InsertByte- 00:08:55.616 [2024-12-05 20:28:49.030495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:10127624197330734220 len:35981 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-12-05 20:28:49.030527] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.616 [2024-12-05 20:28:49.030604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:10127624197330734220 len:35981 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-12-05 20:28:49.030624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.616 [2024-12-05 20:28:49.030708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:10127624197330734220 len:35981 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-12-05 20:28:49.030727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:55.874 #38 NEW cov: 12462 ft: 15399 corp: 27/1385b lim: 105 exec/s: 38 rss: 75Mb L: 78/93 MS: 1 ShuffleBytes- 00:08:55.874 [2024-12-05 20:28:49.080172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446514275779346431 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-12-05 20:28:49.080201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.874 #39 NEW cov: 12462 ft: 15453 corp: 28/1420b lim: 105 exec/s: 39 rss: 75Mb L: 35/93 MS: 1 ChangeByte- 00:08:55.874 [2024-12-05 20:28:49.150515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:281474976647424 len:65327 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-12-05 20:28:49.150544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.874 #42 NEW cov: 12462 ft: 15461 corp: 29/1449b lim: 105 exec/s: 42 rss: 75Mb L: 29/93 MS: 3 CrossOver-ChangeBinInt-CrossOver- 00:08:55.874 [2024-12-05 20:28:49.220964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:64001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-12-05 20:28:49.220993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:55.874 [2024-12-05 20:28:49.221085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-12-05 20:28:49.221102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:55.874 #43 NEW cov: 12462 ft: 15465 corp: 30/1497b lim: 105 exec/s: 21 rss: 75Mb L: 48/93 MS: 1 ChangeBinInt- 00:08:55.874 #43 DONE cov: 12462 ft: 15465 corp: 30/1497b lim: 105 exec/s: 21 rss: 75Mb 00:08:55.874 Done 43 runs in 2 second(s) 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local 
timen=1 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:56.132 20:28:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:08:56.132 [2024-12-05 20:28:49.390699] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:08:56.132 [2024-12-05 20:28:49.390790] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845697 ] 00:08:56.389 [2024-12-05 20:28:49.599144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.389 [2024-12-05 20:28:49.638833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.389 [2024-12-05 20:28:49.698193] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.389 [2024-12-05 20:28:49.714417] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:56.389 INFO: Running with entropic power schedule (0xFF, 100). 00:08:56.389 INFO: Seed: 3334640270 00:08:56.389 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:56.389 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:56.389 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:56.389 INFO: A corpus is not provided, starting from an empty corpus 00:08:56.389 #2 INITED exec/s: 0 rss: 67Mb 00:08:56.389 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
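Each fuzzer in the trace above is brought up the same way: the loop index picks a unique TCP port (44 followed by the zero-padded index), sed rewrites the JSON config template to listen on that port, two known SPDK shutdown-path allocations are registered as LeakSanitizer suppressions, and llvm_nvme_fuzz is launched against the resulting transport id. A condensed sketch of one iteration, assuming SPDK_DIR points at the checkout; the paths are placeholders rather than values copied from run.sh:

  i=17
  port="44$(printf '%02d' "$i")"               # 17 -> 4417
  nvmf_cfg="/tmp/fuzz_json_${i}.conf"
  suppress_file="/var/tmp/suppress_nvmf_fuzz"
  corpus_dir="${SPDK_DIR}/../corpus/llvm_nvmf_${i}"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"

  mkdir -p "$corpus_dir"
  # Point this run's config at its own listener port.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
      "${SPDK_DIR}/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # Leaks expected on the fuzzer's abrupt-shutdown path are suppressed.
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$suppress_file"
  export LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"

  "${SPDK_DIR}/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
      -m 0x1 -s 512 -F "$trid" -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$i"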
00:08:56.389 This may also happen if the target rejected all inputs we tried so far 00:08:56.389 [2024-12-05 20:28:49.791655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.389 [2024-12-05 20:28:49.791699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:56.389 [2024-12-05 20:28:49.791772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.389 [2024-12-05 20:28:49.791794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:56.902 NEW_FUNC[1/718]: 0x455aa8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:56.902 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:56.902 #15 NEW cov: 12238 ft: 12236 corp: 2/54b lim: 120 exec/s: 0 rss: 74Mb L: 53/53 MS: 3 ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:08:56.902 [2024-12-05 20:28:50.142819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.902 [2024-12-05 20:28:50.142880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:56.902 [2024-12-05 20:28:50.142985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.902 [2024-12-05 20:28:50.143010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:56.902 [2024-12-05 20:28:50.143109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.902 [2024-12-05 20:28:50.143132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:56.902 #17 NEW cov: 12368 ft: 13337 corp: 3/133b lim: 120 exec/s: 0 rss: 74Mb L: 79/79 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:56.902 [2024-12-05 20:28:50.202598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.902 [2024-12-05 20:28:50.202630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:56.902 [2024-12-05 20:28:50.202714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.902 [2024-12-05 20:28:50.202729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:56.902 #18 NEW cov: 12374 ft: 13543 corp: 4/187b lim: 120 exec/s: 0 rss: 74Mb L: 54/79 MS: 1 InsertByte- 00:08:56.902 [2024-12-05 20:28:50.272843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.902 [2024-12-05 20:28:50.272877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:56.902 [2024-12-05 20:28:50.272961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.903 [2024-12-05 20:28:50.272982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:56.903 #20 NEW cov: 12459 ft: 13819 corp: 5/242b lim: 120 exec/s: 0 rss: 74Mb L: 55/79 MS: 2 ShuffleBytes-CrossOver- 00:08:56.903 [2024-12-05 20:28:50.322653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.903 [2024-12-05 20:28:50.322688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.159 #21 NEW cov: 12459 ft: 14715 corp: 6/278b lim: 120 exec/s: 0 rss: 74Mb L: 36/79 MS: 1 EraseBytes- 00:08:57.159 [2024-12-05 20:28:50.393291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.159 [2024-12-05 20:28:50.393323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.159 [2024-12-05 20:28:50.393385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2030043136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.159 [2024-12-05 20:28:50.393404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.159 #22 NEW cov: 12459 ft: 14786 corp: 7/334b lim: 120 exec/s: 0 rss: 74Mb L: 56/79 MS: 1 InsertByte- 00:08:57.159 [2024-12-05 20:28:50.463211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.159 [2024-12-05 20:28:50.463243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.159 #23 NEW cov: 12459 ft: 14841 corp: 8/370b lim: 120 exec/s: 0 rss: 74Mb L: 36/79 MS: 1 ChangeBinInt- 00:08:57.159 [2024-12-05 20:28:50.534435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:154618822656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.159 [2024-12-05 20:28:50.534465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.159 [2024-12-05 20:28:50.534539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.159 [2024-12-05 20:28:50.534560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.159 [2024-12-05 20:28:50.534628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.159 [2024-12-05 20:28:50.534645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.159 [2024-12-05 20:28:50.534737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.159 [2024-12-05 
20:28:50.534765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:57.159 #24 NEW cov: 12459 ft: 15239 corp: 9/481b lim: 120 exec/s: 0 rss: 74Mb L: 111/111 MS: 1 CrossOver- 00:08:57.416 [2024-12-05 20:28:50.604069] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4294967296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.416 [2024-12-05 20:28:50.604098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.416 [2024-12-05 20:28:50.604192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2030043136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.416 [2024-12-05 20:28:50.604211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.416 #25 NEW cov: 12459 ft: 15279 corp: 10/537b lim: 120 exec/s: 0 rss: 74Mb L: 56/111 MS: 1 CMP- DE: "\001\022"- 00:08:57.416 [2024-12-05 20:28:50.653868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.416 [2024-12-05 20:28:50.653896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.416 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:57.416 #31 NEW cov: 12482 ft: 15362 corp: 11/573b lim: 120 exec/s: 0 rss: 74Mb L: 36/111 MS: 1 ChangeBit- 00:08:57.416 [2024-12-05 20:28:50.704089] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.416 [2024-12-05 20:28:50.704120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.416 #32 NEW cov: 12482 ft: 15461 corp: 12/609b lim: 120 exec/s: 0 rss: 74Mb L: 36/111 MS: 1 ShuffleBytes- 00:08:57.416 [2024-12-05 20:28:50.774779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.416 [2024-12-05 20:28:50.774811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.416 [2024-12-05 20:28:50.774906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:989855744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.416 [2024-12-05 20:28:50.774927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.416 #33 NEW cov: 12482 ft: 15529 corp: 13/665b lim: 120 exec/s: 33 rss: 74Mb L: 56/111 MS: 1 InsertByte- 00:08:57.416 [2024-12-05 20:28:50.824524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.416 [2024-12-05 20:28:50.824552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.673 #34 NEW cov: 12482 ft: 15566 corp: 14/702b lim: 120 exec/s: 34 rss: 75Mb L: 37/111 MS: 1 InsertByte- 00:08:57.673 [2024-12-05 20:28:50.895193] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:7523377975663290472 len:26729 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.673 [2024-12-05 20:28:50.895223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.673 [2024-12-05 20:28:50.895303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:7523377975159973992 len:26729 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.673 [2024-12-05 20:28:50.895320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.673 #39 NEW cov: 12482 ft: 15611 corp: 15/772b lim: 120 exec/s: 39 rss: 75Mb L: 70/111 MS: 5 CopyPart-EraseBytes-ChangeBinInt-ChangeByte-InsertRepeatedBytes- 00:08:57.673 [2024-12-05 20:28:50.945302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.673 [2024-12-05 20:28:50.945334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.673 [2024-12-05 20:28:50.945413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.673 [2024-12-05 20:28:50.945431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.673 #40 NEW cov: 12482 ft: 15641 corp: 16/829b lim: 120 exec/s: 40 rss: 75Mb L: 57/111 MS: 1 CopyPart- 00:08:57.673 [2024-12-05 20:28:50.995175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.673 [2024-12-05 20:28:50.995205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.673 #41 NEW cov: 12482 ft: 15655 corp: 17/866b lim: 120 exec/s: 41 rss: 75Mb L: 37/111 MS: 1 InsertByte- 00:08:57.673 [2024-12-05 20:28:51.065605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.673 [2024-12-05 20:28:51.065638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.931 #42 NEW cov: 12482 ft: 15678 corp: 18/903b lim: 120 exec/s: 42 rss: 75Mb L: 37/111 MS: 1 CopyPart- 00:08:57.931 [2024-12-05 20:28:51.135788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.931 [2024-12-05 20:28:51.135821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.931 #43 NEW cov: 12482 ft: 15699 corp: 19/943b lim: 120 exec/s: 43 rss: 75Mb L: 40/111 MS: 1 CMP- DE: "\001\000\000\016"- 00:08:57.931 [2024-12-05 20:28:51.186621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.931 [2024-12-05 20:28:51.186655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.931 [2024-12-05 20:28:51.186717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2030043136 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.931 [2024-12-05 20:28:51.186736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:57.931 [2024-12-05 20:28:51.186823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.931 [2024-12-05 20:28:51.186843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:57.931 #44 NEW cov: 12482 ft: 15728 corp: 20/1026b lim: 120 exec/s: 44 rss: 75Mb L: 83/111 MS: 1 CrossOver- 00:08:57.931 [2024-12-05 20:28:51.256303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.931 [2024-12-05 20:28:51.256333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.931 #45 NEW cov: 12482 ft: 15765 corp: 21/1063b lim: 120 exec/s: 45 rss: 75Mb L: 37/111 MS: 1 ChangeBit- 00:08:57.931 [2024-12-05 20:28:51.326552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:71494644084515840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.931 [2024-12-05 20:28:51.326584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:57.931 #46 NEW cov: 12482 ft: 15774 corp: 22/1100b lim: 120 exec/s: 46 rss: 75Mb L: 37/111 MS: 1 ChangeBinInt- 00:08:58.189 [2024-12-05 20:28:51.376688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.189 [2024-12-05 20:28:51.376719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.189 #47 NEW cov: 12482 ft: 15776 corp: 23/1125b lim: 120 exec/s: 47 rss: 75Mb L: 25/111 MS: 1 EraseBytes- 00:08:58.189 [2024-12-05 20:28:51.447039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.189 [2024-12-05 20:28:51.447068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.189 #48 NEW cov: 12482 ft: 15796 corp: 24/1163b lim: 120 exec/s: 48 rss: 75Mb L: 38/111 MS: 1 InsertByte- 00:08:58.189 [2024-12-05 20:28:51.497301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.189 [2024-12-05 20:28:51.497329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.189 #49 NEW cov: 12482 ft: 15865 corp: 25/1200b lim: 120 exec/s: 49 rss: 75Mb L: 37/111 MS: 1 InsertByte- 00:08:58.189 [2024-12-05 20:28:51.547463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2949120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.189 [2024-12-05 20:28:51.547493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.189 #50 NEW cov: 12482 ft: 15873 corp: 26/1237b lim: 120 exec/s: 50 rss: 75Mb L: 37/111 MS: 1 ChangeByte- 00:08:58.189 [2024-12-05 20:28:51.617752] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2097152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.189 [2024-12-05 20:28:51.617783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.446 #51 NEW cov: 12482 ft: 15900 corp: 27/1273b lim: 120 exec/s: 51 rss: 75Mb L: 36/111 MS: 1 ChangeBit- 00:08:58.446 [2024-12-05 20:28:51.668327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.446 [2024-12-05 20:28:51.668359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.446 [2024-12-05 20:28:51.668425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:8718968878589280256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.446 [2024-12-05 20:28:51.668444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.446 #52 NEW cov: 12482 ft: 15912 corp: 28/1338b lim: 120 exec/s: 52 rss: 75Mb L: 65/111 MS: 1 CrossOver- 00:08:58.446 [2024-12-05 20:28:51.738494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.446 [2024-12-05 20:28:51.738524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.446 [2024-12-05 20:28:51.738588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.446 [2024-12-05 20:28:51.738610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.446 #53 NEW cov: 12482 ft: 15917 corp: 29/1402b lim: 120 exec/s: 26 rss: 75Mb L: 64/111 MS: 1 CopyPart- 00:08:58.446 #53 DONE cov: 12482 ft: 15917 corp: 29/1402b lim: 120 exec/s: 26 rss: 75Mb 00:08:58.446 ###### Recommended dictionary. ###### 00:08:58.447 "\001\022" # Uses: 1 00:08:58.447 "\001\000\000\016" # Uses: 0 00:08:58.447 ###### End of recommended dictionary. 
###### 00:08:58.447 Done 53 runs in 2 second(s) 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:58.447 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:58.704 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:58.704 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:58.704 20:28:51 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:58.704 [2024-12-05 20:28:51.915192] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
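At the end of each run libFuzzer prints a "Recommended dictionary" block like the one above: byte sequences, with use counts, that unlocked new coverage during the run. The quoted entries are already in the dictionary format libFuzzer reads, so they can be harvested from a captured log and handed to later runs. A sketch under two assumptions: the console output was saved to fuzz_run.log (a placeholder name), and the harness forwards a -dict= option through to libFuzzer, which this job's command line does not show:

  # Keep only the quoted entries between the dictionary markers.
  sed -n '/Recommended dictionary/,/End of recommended dictionary/p' fuzz_run.log |
    grep -o '"[^"]*"' > nvmf.dict
  # A later run could then seed its mutations with:
  #   llvm_nvme_fuzz ... -dict=nvmf.dict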
00:08:58.704 [2024-12-05 20:28:51.915283] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846071 ] 00:08:58.704 [2024-12-05 20:28:52.117412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.962 [2024-12-05 20:28:52.155486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.962 [2024-12-05 20:28:52.214565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.963 [2024-12-05 20:28:52.230788] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:58.963 INFO: Running with entropic power schedule (0xFF, 100). 00:08:58.963 INFO: Seed: 1553856923 00:08:58.963 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:08:58.963 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:08:58.963 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:58.963 INFO: A corpus is not provided, starting from an empty corpus 00:08:58.963 #2 INITED exec/s: 0 rss: 67Mb 00:08:58.963 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:58.963 This may also happen if the target rejected all inputs we tried so far 00:08:58.963 [2024-12-05 20:28:52.279402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:58.963 [2024-12-05 20:28:52.279434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:58.963 [2024-12-05 20:28:52.279467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:58.963 [2024-12-05 20:28:52.279482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:58.963 [2024-12-05 20:28:52.279537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:58.963 [2024-12-05 20:28:52.279552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.221 NEW_FUNC[1/716]: 0x459398 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:59.221 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:59.221 #8 NEW cov: 12199 ft: 12198 corp: 2/65b lim: 100 exec/s: 0 rss: 74Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:08:59.221 [2024-12-05 20:28:52.610260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.221 [2024-12-05 20:28:52.610300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.221 [2024-12-05 20:28:52.610352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.221 [2024-12-05 20:28:52.610368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.221 [2024-12-05 20:28:52.610418] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.221 [2024-12-05 20:28:52.610433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.221 [2024-12-05 20:28:52.610482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:59.221 [2024-12-05 20:28:52.610497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:59.221 #10 NEW cov: 12312 ft: 12931 corp: 3/160b lim: 100 exec/s: 0 rss: 74Mb L: 95/95 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:59.221 [2024-12-05 20:28:52.650167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.221 [2024-12-05 20:28:52.650196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.221 [2024-12-05 20:28:52.650234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.221 [2024-12-05 20:28:52.650248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.221 [2024-12-05 20:28:52.650300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.221 [2024-12-05 20:28:52.650315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.480 #16 NEW cov: 12318 ft: 13062 corp: 4/224b lim: 100 exec/s: 0 rss: 74Mb L: 64/95 MS: 1 ChangeBinInt- 00:08:59.480 [2024-12-05 20:28:52.710356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.480 [2024-12-05 20:28:52.710383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.710425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.480 [2024-12-05 20:28:52.710440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.710490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.480 [2024-12-05 20:28:52.710504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.480 #17 NEW cov: 12403 ft: 13335 corp: 5/288b lim: 100 exec/s: 0 rss: 74Mb L: 64/95 MS: 1 ChangeBit- 00:08:59.480 [2024-12-05 20:28:52.770443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.480 [2024-12-05 20:28:52.770469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.770511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.480 [2024-12-05 20:28:52.770526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.770574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) 
sqid:1 cid:2 nsid:0 00:08:59.480 [2024-12-05 20:28:52.770589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.480 #18 NEW cov: 12403 ft: 13420 corp: 6/352b lim: 100 exec/s: 0 rss: 74Mb L: 64/95 MS: 1 CopyPart- 00:08:59.480 [2024-12-05 20:28:52.830605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.480 [2024-12-05 20:28:52.830632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.830677] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.480 [2024-12-05 20:28:52.830692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.830742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.480 [2024-12-05 20:28:52.830762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.480 #19 NEW cov: 12403 ft: 13662 corp: 7/416b lim: 100 exec/s: 0 rss: 74Mb L: 64/95 MS: 1 ChangeByte- 00:08:59.480 [2024-12-05 20:28:52.870724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.480 [2024-12-05 20:28:52.870754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.870807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.480 [2024-12-05 20:28:52.870821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.870872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.480 [2024-12-05 20:28:52.870904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.480 #20 NEW cov: 12403 ft: 13721 corp: 8/480b lim: 100 exec/s: 0 rss: 74Mb L: 64/95 MS: 1 ShuffleBytes- 00:08:59.480 [2024-12-05 20:28:52.910966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.480 [2024-12-05 20:28:52.910992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.911038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.480 [2024-12-05 20:28:52.911052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.911102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.480 [2024-12-05 20:28:52.911117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.480 [2024-12-05 20:28:52.911166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:59.480 [2024-12-05 20:28:52.911180] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:59.739 #21 NEW cov: 12403 ft: 13831 corp: 9/577b lim: 100 exec/s: 0 rss: 74Mb L: 97/97 MS: 1 CrossOver- 00:08:59.739 [2024-12-05 20:28:52.971025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.739 [2024-12-05 20:28:52.971050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.739 [2024-12-05 20:28:52.971085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.739 [2024-12-05 20:28:52.971099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.739 [2024-12-05 20:28:52.971151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.739 [2024-12-05 20:28:52.971166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.739 #22 NEW cov: 12403 ft: 13893 corp: 10/641b lim: 100 exec/s: 0 rss: 74Mb L: 64/97 MS: 1 ChangeByte- 00:08:59.739 [2024-12-05 20:28:53.030945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.739 [2024-12-05 20:28:53.030970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.739 #23 NEW cov: 12403 ft: 14312 corp: 11/663b lim: 100 exec/s: 0 rss: 74Mb L: 22/97 MS: 1 CrossOver- 00:08:59.739 [2024-12-05 20:28:53.071240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.739 [2024-12-05 20:28:53.071265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.739 [2024-12-05 20:28:53.071325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.739 [2024-12-05 20:28:53.071340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.739 [2024-12-05 20:28:53.071390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.739 [2024-12-05 20:28:53.071404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.739 #24 NEW cov: 12403 ft: 14370 corp: 12/727b lim: 100 exec/s: 0 rss: 74Mb L: 64/97 MS: 1 ChangeBit- 00:08:59.739 [2024-12-05 20:28:53.111170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.739 [2024-12-05 20:28:53.111197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.739 #25 NEW cov: 12403 ft: 14394 corp: 13/749b lim: 100 exec/s: 0 rss: 74Mb L: 22/97 MS: 1 ShuffleBytes- 00:08:59.739 [2024-12-05 20:28:53.171531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.739 [2024-12-05 20:28:53.171558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.739 [2024-12-05 20:28:53.171593] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.739 [2024-12-05 20:28:53.171608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.739 [2024-12-05 20:28:53.171660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.739 [2024-12-05 20:28:53.171675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.998 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:08:59.998 #26 NEW cov: 12426 ft: 14474 corp: 14/821b lim: 100 exec/s: 0 rss: 75Mb L: 72/97 MS: 1 CMP- DE: "\001x\002I\341\373&\222"- 00:08:59.998 [2024-12-05 20:28:53.211637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.998 [2024-12-05 20:28:53.211664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.211708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.998 [2024-12-05 20:28:53.211723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.211767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.998 [2024-12-05 20:28:53.211783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.998 #27 NEW cov: 12426 ft: 14506 corp: 15/885b lim: 100 exec/s: 0 rss: 75Mb L: 64/97 MS: 1 PersAutoDict- DE: "\001x\002I\341\373&\222"- 00:08:59.998 [2024-12-05 20:28:53.251859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.998 [2024-12-05 20:28:53.251884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.251937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.998 [2024-12-05 20:28:53.251949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.251997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.998 [2024-12-05 20:28:53.252012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.252060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:59.998 [2024-12-05 20:28:53.252074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:59.998 #28 NEW cov: 12426 ft: 14588 corp: 16/983b lim: 100 exec/s: 28 rss: 75Mb L: 98/98 MS: 1 CrossOver- 00:08:59.998 [2024-12-05 20:28:53.311916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.998 [2024-12-05 20:28:53.311941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.311988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.998 [2024-12-05 20:28:53.312004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.312052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.998 [2024-12-05 20:28:53.312066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.998 #29 NEW cov: 12426 ft: 14602 corp: 17/1048b lim: 100 exec/s: 29 rss: 75Mb L: 65/98 MS: 1 InsertByte- 00:08:59.998 [2024-12-05 20:28:53.352054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.998 [2024-12-05 20:28:53.352079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.352114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.998 [2024-12-05 20:28:53.352128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.352180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.998 [2024-12-05 20:28:53.352194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.998 #30 NEW cov: 12426 ft: 14644 corp: 18/1113b lim: 100 exec/s: 30 rss: 75Mb L: 65/98 MS: 1 InsertByte- 00:08:59.998 [2024-12-05 20:28:53.392156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.998 [2024-12-05 20:28:53.392181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.392228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.998 [2024-12-05 20:28:53.392242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.392291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.998 [2024-12-05 20:28:53.392306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:59.998 #31 NEW cov: 12426 ft: 14661 corp: 19/1177b lim: 100 exec/s: 31 rss: 75Mb L: 64/98 MS: 1 ShuffleBytes- 00:08:59.998 [2024-12-05 20:28:53.432258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:59.998 [2024-12-05 20:28:53.432285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:59.998 [2024-12-05 20:28:53.432333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:59.998 [2024-12-05 20:28:53.432345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
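For readers scanning the stat stream, the "#N NEW cov: ..." records above are standard libFuzzer status lines; the gloss below follows upstream libFuzzer's documented conventions and is not SPDK-specific output, so treat it as a reference note rather than something this log asserts. Taking one line from this run:

  #24 NEW cov: 12403 ft: 14370 corp: 12/727b lim: 100 exec/s: 0 rss: 74Mb L: 64/97 MS: 1 ChangeBit-
  #24            executions performed so far when this input was found
  cov:           total code blocks/edges covered by the corpus
  ft:            coverage "features" (edges plus counters, value profiles, etc.)
  corp: 12/727b  corpus entry count and total corpus size in bytes
  lim:           current cap on the length of newly generated inputs
  exec/s:        executions per second (printed as 0 inside the first second)
  rss:           resident memory of the fuzzer process
  L: 64/97       size of this new input / largest input in the corpus
  MS:            mutation sequence that produced it (count, then mutator names such as ChangeBit or CrossOver)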
00:08:59.998 [2024-12-05 20:28:53.432398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:59.998 [2024-12-05 20:28:53.432413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.257 #40 NEW cov: 12426 ft: 14668 corp: 20/1251b lim: 100 exec/s: 40 rss: 75Mb L: 74/98 MS: 4 CrossOver-ChangeByte-InsertByte-CrossOver- 00:09:00.257 [2024-12-05 20:28:53.472400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.257 [2024-12-05 20:28:53.472425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.257 [2024-12-05 20:28:53.472459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.258 [2024-12-05 20:28:53.472474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.258 [2024-12-05 20:28:53.472524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:00.258 [2024-12-05 20:28:53.472537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.258 #41 NEW cov: 12426 ft: 14705 corp: 21/1315b lim: 100 exec/s: 41 rss: 75Mb L: 64/98 MS: 1 ShuffleBytes- 00:09:00.258 [2024-12-05 20:28:53.512275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.258 [2024-12-05 20:28:53.512300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.258 #42 NEW cov: 12426 ft: 14774 corp: 22/1337b lim: 100 exec/s: 42 rss: 75Mb L: 22/98 MS: 1 ShuffleBytes- 00:09:00.258 [2024-12-05 20:28:53.572457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.258 [2024-12-05 20:28:53.572485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.258 #43 NEW cov: 12426 ft: 14791 corp: 23/1359b lim: 100 exec/s: 43 rss: 75Mb L: 22/98 MS: 1 ShuffleBytes- 00:09:00.258 [2024-12-05 20:28:53.632611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.258 [2024-12-05 20:28:53.632636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.258 #44 NEW cov: 12426 ft: 14829 corp: 24/1381b lim: 100 exec/s: 44 rss: 75Mb L: 22/98 MS: 1 ShuffleBytes- 00:09:00.258 [2024-12-05 20:28:53.672914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.258 [2024-12-05 20:28:53.672939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.258 [2024-12-05 20:28:53.672987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.258 [2024-12-05 20:28:53.673002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.258 [2024-12-05 20:28:53.673052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) 
sqid:1 cid:2 nsid:0 00:09:00.258 [2024-12-05 20:28:53.673066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.517 #45 NEW cov: 12426 ft: 14857 corp: 25/1445b lim: 100 exec/s: 45 rss: 75Mb L: 64/98 MS: 1 CopyPart- 00:09:00.517 [2024-12-05 20:28:53.733136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.517 [2024-12-05 20:28:53.733163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.517 [2024-12-05 20:28:53.733198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.517 [2024-12-05 20:28:53.733214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.517 [2024-12-05 20:28:53.733265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:00.517 [2024-12-05 20:28:53.733280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.517 #46 NEW cov: 12426 ft: 14871 corp: 26/1510b lim: 100 exec/s: 46 rss: 75Mb L: 65/98 MS: 1 InsertByte- 00:09:00.517 [2024-12-05 20:28:53.773341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.517 [2024-12-05 20:28:53.773369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.517 [2024-12-05 20:28:53.773414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.517 [2024-12-05 20:28:53.773429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.517 [2024-12-05 20:28:53.773480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:00.517 [2024-12-05 20:28:53.773495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.517 [2024-12-05 20:28:53.773546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:00.517 [2024-12-05 20:28:53.773560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:00.517 #47 NEW cov: 12426 ft: 14900 corp: 27/1609b lim: 100 exec/s: 47 rss: 75Mb L: 99/99 MS: 1 CopyPart- 00:09:00.517 [2024-12-05 20:28:53.833527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.517 [2024-12-05 20:28:53.833554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.517 [2024-12-05 20:28:53.833599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.517 [2024-12-05 20:28:53.833614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.517 [2024-12-05 20:28:53.833663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:00.517 [2024-12-05 20:28:53.833678] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.517 [2024-12-05 20:28:53.833730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:00.518 [2024-12-05 20:28:53.833750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:00.518 #48 NEW cov: 12426 ft: 14914 corp: 28/1704b lim: 100 exec/s: 48 rss: 75Mb L: 95/99 MS: 1 ChangeBinInt- 00:09:00.518 [2024-12-05 20:28:53.873521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.518 [2024-12-05 20:28:53.873548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.518 [2024-12-05 20:28:53.873593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.518 [2024-12-05 20:28:53.873608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.518 [2024-12-05 20:28:53.873663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:00.518 [2024-12-05 20:28:53.873678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.518 #49 NEW cov: 12426 ft: 14923 corp: 29/1772b lim: 100 exec/s: 49 rss: 75Mb L: 68/99 MS: 1 CMP- DE: "\000\000\000\000"- 00:09:00.518 [2024-12-05 20:28:53.913714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.518 [2024-12-05 20:28:53.913742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.518 [2024-12-05 20:28:53.913799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.518 [2024-12-05 20:28:53.913813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.518 [2024-12-05 20:28:53.913863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:00.518 [2024-12-05 20:28:53.913878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.518 [2024-12-05 20:28:53.913930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:00.518 [2024-12-05 20:28:53.913945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:00.777 #50 NEW cov: 12426 ft: 14936 corp: 30/1859b lim: 100 exec/s: 50 rss: 75Mb L: 87/99 MS: 1 EraseBytes- 00:09:00.777 [2024-12-05 20:28:53.973632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.777 [2024-12-05 20:28:53.973659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.777 [2024-12-05 20:28:53.973703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.777 [2024-12-05 20:28:53.973718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.777 [2024-12-05 20:28:53.973769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:00.777 [2024-12-05 20:28:53.973783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.777 [2024-12-05 20:28:54.013918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.777 [2024-12-05 20:28:54.013945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.777 [2024-12-05 20:28:54.013990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.777 [2024-12-05 20:28:54.014006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.777 [2024-12-05 20:28:54.014058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:00.777 [2024-12-05 20:28:54.014073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.777 #52 NEW cov: 12426 ft: 14940 corp: 31/1927b lim: 100 exec/s: 52 rss: 75Mb L: 68/99 MS: 2 PersAutoDict-ChangeBit- DE: "\000\000\000\000"- 00:09:00.777 [2024-12-05 20:28:54.054054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.777 [2024-12-05 20:28:54.054081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.777 [2024-12-05 20:28:54.054117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.777 [2024-12-05 20:28:54.054132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.777 [2024-12-05 20:28:54.054187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:00.777 [2024-12-05 20:28:54.054201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:00.777 #53 NEW cov: 12426 ft: 14941 corp: 32/2000b lim: 100 exec/s: 53 rss: 75Mb L: 73/99 MS: 1 InsertByte- 00:09:00.777 [2024-12-05 20:28:54.114008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.777 [2024-12-05 20:28:54.114034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.777 [2024-12-05 20:28:54.114069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:00.777 [2024-12-05 20:28:54.114083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:00.777 #54 NEW cov: 12426 ft: 15195 corp: 33/2049b lim: 100 exec/s: 54 rss: 75Mb L: 49/99 MS: 1 EraseBytes- 00:09:00.777 [2024-12-05 20:28:54.174106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:00.777 [2024-12-05 20:28:54.174131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:00.777 #55 NEW cov: 12426 ft: 15213 corp: 34/2071b lim: 100 exec/s: 55 rss: 75Mb L: 22/99 MS: 1 ChangeBinInt- 00:09:01.037 [2024-12-05 20:28:54.214448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:01.037 [2024-12-05 20:28:54.214474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.037 [2024-12-05 20:28:54.214521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:01.037 [2024-12-05 20:28:54.214537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.037 [2024-12-05 20:28:54.214589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:01.037 [2024-12-05 20:28:54.214604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.037 #56 NEW cov: 12426 ft: 15218 corp: 35/2135b lim: 100 exec/s: 56 rss: 75Mb L: 64/99 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:09:01.037 [2024-12-05 20:28:54.254531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:01.037 [2024-12-05 20:28:54.254556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.037 [2024-12-05 20:28:54.254603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:01.037 [2024-12-05 20:28:54.254618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.037 [2024-12-05 20:28:54.254671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:01.037 [2024-12-05 20:28:54.254686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.037 #57 NEW cov: 12426 ft: 15231 corp: 36/2205b lim: 100 exec/s: 28 rss: 75Mb L: 70/99 MS: 1 CMP- DE: "\010\000"- 00:09:01.037 #57 DONE cov: 12426 ft: 15231 corp: 36/2205b lim: 100 exec/s: 28 rss: 75Mb 00:09:01.037 ###### Recommended dictionary. ###### 00:09:01.037 "\001x\002I\341\373&\222" # Uses: 3 00:09:01.037 "\000\000\000\000" # Uses: 2 00:09:01.037 "\010\000" # Uses: 0 00:09:01.037 ###### End of recommended dictionary. 
###### 00:09:01.037 Done 57 runs in 2 second(s) 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:01.037 20:28:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:09:01.037 [2024-12-05 20:28:54.445398] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
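The shell trace above is the interesting part of each iteration: nvmf/run.sh derives every per-fuzzer setting from the fuzzer number before launching llvm_nvme_fuzz. Below is a condensed sketch of those same steps. Here rootdir and output_dir are shorthand introduced for the long Jenkins workspace paths, and the two output redirects are inferred from the -c flag and the suppression-file path, since the trace prints only the commands themselves:

  # Fuzzer N listens on TCP port 44NN and gets its own config and corpus dir
  fuzzer_type=19
  port="44$(printf %02d "$fuzzer_type")"                   # -> 4419
  corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"
  mkdir -p "$corpus_dir"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

  # Rewrite the template JSON config so the TCP listener uses this run's port
  # (redirect inferred; the launch below consumes the file via -c)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_${fuzzer_type}.conf"

  # Two known, intentional leaks are suppressed for LeakSanitizer
  {
      echo "leak:spdk_nvmf_qpair_disconnect"
      echo "leak:nvmf_ctrlr_create"
  } > /var/tmp/suppress_nvmf_fuzz

  LSAN_OPTIONS="report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0" \
      "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
      -m 0x1 -s 512 -P "$output_dir" -F "$trid" \
      -c "/tmp/fuzz_json_${fuzzer_type}.conf" -t 1 \
      -D "$corpus_dir" -Z "$fuzzer_type"

Giving each fuzzer type its own port, config file, and corpus directory is what lets the short-fuzz job run the fuzzer types back-to-back without TCP listeners or corpora colliding.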
00:09:01.037 [2024-12-05 20:28:54.445471] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846436 ] 00:09:01.295 [2024-12-05 20:28:54.660675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.295 [2024-12-05 20:28:54.698796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.553 [2024-12-05 20:28:54.758397] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.553 [2024-12-05 20:28:54.774648] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:09:01.553 INFO: Running with entropic power schedule (0xFF, 100). 00:09:01.553 INFO: Seed: 4097698104 00:09:01.553 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:09:01.553 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:09:01.553 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:01.553 INFO: A corpus is not provided, starting from an empty corpus 00:09:01.553 #2 INITED exec/s: 0 rss: 66Mb 00:09:01.553 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:01.553 This may also happen if the target rejected all inputs we tried so far 00:09:01.553 [2024-12-05 20:28:54.823390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:09:01.553 [2024-12-05 20:28:54.823422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.553 [2024-12-05 20:28:54.823455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:01.553 [2024-12-05 20:28:54.823471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.553 [2024-12-05 20:28:54.823519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:01.553 [2024-12-05 20:28:54.823539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.553 [2024-12-05 20:28:54.823587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:01.553 [2024-12-05 20:28:54.823601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:01.820 NEW_FUNC[1/716]: 0x45c358 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:09:01.820 NEW_FUNC[2/716]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:01.820 #3 NEW cov: 12177 ft: 12176 corp: 2/48b lim: 50 exec/s: 0 rss: 73Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:09:01.821 [2024-12-05 20:28:55.174410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17578661996817282035 len:62452 00:09:01.821 [2024-12-05 20:28:55.174452] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.821 [2024-12-05 20:28:55.174524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17578661999652631539 len:62452 00:09:01.821 [2024-12-05 20:28:55.174541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.821 [2024-12-05 20:28:55.174591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17578661999652631539 len:62452 00:09:01.821 [2024-12-05 20:28:55.174608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.821 [2024-12-05 20:28:55.174662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:17578661999652631539 len:62452 00:09:01.821 [2024-12-05 20:28:55.174678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:01.821 #8 NEW cov: 12290 ft: 12747 corp: 3/96b lim: 50 exec/s: 0 rss: 73Mb L: 48/48 MS: 5 CrossOver-ChangeBit-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:09:01.821 [2024-12-05 20:28:55.214433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17578660953140229107 len:12532 00:09:01.822 [2024-12-05 20:28:55.214466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:01.822 [2024-12-05 20:28:55.214517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17578661999652631539 len:62452 00:09:01.822 [2024-12-05 20:28:55.214535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:01.822 [2024-12-05 20:28:55.214590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17578661999652631539 len:62452 00:09:01.822 [2024-12-05 20:28:55.214606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:01.822 [2024-12-05 20:28:55.214656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:17578661999652631539 len:62452 00:09:01.822 [2024-12-05 20:28:55.214673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.089 #14 NEW cov: 12296 ft: 13021 corp: 4/144b lim: 50 exec/s: 0 rss: 73Mb L: 48/48 MS: 1 ChangeBinInt- 00:09:02.089 [2024-12-05 20:28:55.274596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:65536 00:09:02.089 [2024-12-05 20:28:55.274631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.089 [2024-12-05 20:28:55.274700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.089 [2024-12-05 20:28:55.274716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
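The lba and len values in these notices fall directly out of the mutated command dwords. For NVMe Write Uncorrectable, the command that fuzz_nvm_write_uncorrectable_command builds per the NEW_FUNC lines above, CDW10/11 carry the 64-bit starting LBA and the low 16 bits of CDW12 carry a 0-based block count; that field layout comes from the NVMe base specification, not from anything stated in this log. So an input of repeated 0xff bytes (the InsertRepeatedBytes mutation above) decodes to lba = 0xffffffffffffffff = 18446744073709551615 and len = 0xffff + 1 = 65536, exactly what nvme_io_qpair_print_command reports, while an input of repeated 0xf3 bytes decodes the same way to lba:17578661999652631539 len:62452 (0xf3f3 + 1).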
00:09:02.089 [2024-12-05 20:28:55.274772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.089 [2024-12-05 20:28:55.274787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.089 [2024-12-05 20:28:55.274842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.089 [2024-12-05 20:28:55.274859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.089 #15 NEW cov: 12381 ft: 13323 corp: 5/191b lim: 50 exec/s: 0 rss: 73Mb L: 47/48 MS: 1 ChangeBit- 00:09:02.089 [2024-12-05 20:28:55.334730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:09:02.089 [2024-12-05 20:28:55.334767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.089 [2024-12-05 20:28:55.334802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.089 [2024-12-05 20:28:55.334819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.089 [2024-12-05 20:28:55.334871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073700835327 len:65536 00:09:02.090 [2024-12-05 20:28:55.334887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.334942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.090 [2024-12-05 20:28:55.334957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.090 #21 NEW cov: 12381 ft: 13400 corp: 6/238b lim: 50 exec/s: 0 rss: 73Mb L: 47/48 MS: 1 ChangeByte- 00:09:02.090 [2024-12-05 20:28:55.374828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:64768 00:09:02.090 [2024-12-05 20:28:55.374869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.374924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.090 [2024-12-05 20:28:55.374940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.374992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.090 [2024-12-05 20:28:55.375008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.375060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 
00:09:02.090 [2024-12-05 20:28:55.375075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.090 #22 NEW cov: 12381 ft: 13460 corp: 7/285b lim: 50 exec/s: 0 rss: 74Mb L: 47/48 MS: 1 ChangeBinInt- 00:09:02.090 [2024-12-05 20:28:55.435002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070672086015 len:65536 00:09:02.090 [2024-12-05 20:28:55.435034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.435076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.090 [2024-12-05 20:28:55.435093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.435146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.090 [2024-12-05 20:28:55.435180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.435232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:17578660956177694707 len:12532 00:09:02.090 [2024-12-05 20:28:55.435247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.090 #23 NEW cov: 12381 ft: 13522 corp: 8/325b lim: 50 exec/s: 0 rss: 74Mb L: 40/48 MS: 1 CrossOver- 00:09:02.090 [2024-12-05 20:28:55.495118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:09:02.090 [2024-12-05 20:28:55.495146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.495196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.090 [2024-12-05 20:28:55.495213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.495264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.090 [2024-12-05 20:28:55.495281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.090 [2024-12-05 20:28:55.495334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.090 [2024-12-05 20:28:55.495349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.090 #24 NEW cov: 12381 ft: 13559 corp: 9/372b lim: 50 exec/s: 0 rss: 74Mb L: 47/48 MS: 1 ChangeBinInt- 00:09:02.348 [2024-12-05 20:28:55.535252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:65536 00:09:02.348 [2024-12-05 20:28:55.535280] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.348 [2024-12-05 20:28:55.535332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.348 [2024-12-05 20:28:55.535348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.348 [2024-12-05 20:28:55.535401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.348 [2024-12-05 20:28:55.535418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.348 [2024-12-05 20:28:55.535470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.348 [2024-12-05 20:28:55.535487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.348 #25 NEW cov: 12381 ft: 13601 corp: 10/417b lim: 50 exec/s: 0 rss: 74Mb L: 45/48 MS: 1 EraseBytes- 00:09:02.348 [2024-12-05 20:28:55.575325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:65536 00:09:02.348 [2024-12-05 20:28:55.575352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.348 [2024-12-05 20:28:55.575400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.348 [2024-12-05 20:28:55.575417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.348 [2024-12-05 20:28:55.575467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.575483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.575535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.575549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.349 #26 NEW cov: 12381 ft: 13696 corp: 11/465b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:09:02.349 [2024-12-05 20:28:55.635603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.635632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.635696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.635713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.635768] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.635784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.635837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.635852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.349 #27 NEW cov: 12381 ft: 13761 corp: 12/513b lim: 50 exec/s: 0 rss: 74Mb L: 48/48 MS: 1 CrossOver- 00:09:02.349 [2024-12-05 20:28:55.675490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:09:02.349 [2024-12-05 20:28:55.675517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.675553] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.675569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.675623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.675640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.349 #28 NEW cov: 12381 ft: 14053 corp: 13/547b lim: 50 exec/s: 0 rss: 74Mb L: 34/48 MS: 1 CrossOver- 00:09:02.349 [2024-12-05 20:28:55.715752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446534066988646335 len:65536 00:09:02.349 [2024-12-05 20:28:55.715782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.715820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.715836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.715888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.715918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.715975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.715991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.349 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:02.349 #29 NEW cov: 12404 ft: 14103 corp: 14/594b lim: 50 exec/s: 0 rss: 74Mb L: 
47/48 MS: 1 ChangeByte- 00:09:02.349 [2024-12-05 20:28:55.755853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070672086015 len:65536 00:09:02.349 [2024-12-05 20:28:55.755882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.755931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65408 00:09:02.349 [2024-12-05 20:28:55.755948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.756001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.349 [2024-12-05 20:28:55.756016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.349 [2024-12-05 20:28:55.756072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:17578660956177694707 len:12532 00:09:02.349 [2024-12-05 20:28:55.756088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.608 #30 NEW cov: 12404 ft: 14168 corp: 15/634b lim: 50 exec/s: 0 rss: 74Mb L: 40/48 MS: 1 ChangeBit- 00:09:02.608 [2024-12-05 20:28:55.816041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070672086015 len:65536 00:09:02.608 [2024-12-05 20:28:55.816068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.816121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65408 00:09:02.608 [2024-12-05 20:28:55.816138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.816191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.608 [2024-12-05 20:28:55.816208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.816263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:4294967040 len:1 00:09:02.608 [2024-12-05 20:28:55.816278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.608 #31 NEW cov: 12404 ft: 14177 corp: 16/682b lim: 50 exec/s: 31 rss: 74Mb L: 48/48 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:09:02.608 [2024-12-05 20:28:55.876170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:09:02.608 [2024-12-05 20:28:55.876198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.876247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.608 [2024-12-05 20:28:55.876264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.876319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.608 [2024-12-05 20:28:55.876335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.876388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.608 [2024-12-05 20:28:55.876404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.608 #32 NEW cov: 12404 ft: 14222 corp: 17/729b lim: 50 exec/s: 32 rss: 74Mb L: 47/48 MS: 1 ShuffleBytes- 00:09:02.608 [2024-12-05 20:28:55.936374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:64768 00:09:02.608 [2024-12-05 20:28:55.936401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.936450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.608 [2024-12-05 20:28:55.936466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.936518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:65536 00:09:02.608 [2024-12-05 20:28:55.936534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.936585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.608 [2024-12-05 20:28:55.936600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.608 #33 NEW cov: 12404 ft: 14244 corp: 18/776b lim: 50 exec/s: 33 rss: 74Mb L: 47/48 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:09:02.608 [2024-12-05 20:28:55.996532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:65536 00:09:02.608 [2024-12-05 20:28:55.996559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.996624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9765923850557587455 len:65536 00:09:02.608 [2024-12-05 20:28:55.996642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.996697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.608 [2024-12-05 20:28:55.996714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.608 [2024-12-05 20:28:55.996772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.608 [2024-12-05 20:28:55.996789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.608 #34 NEW cov: 12404 ft: 14250 corp: 19/824b lim: 50 exec/s: 34 rss: 74Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:09:02.608 [2024-12-05 20:28:56.036656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:64768 00:09:02.608 [2024-12-05 20:28:56.036684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.609 [2024-12-05 20:28:56.036735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551425 len:65536 00:09:02.609 [2024-12-05 20:28:56.036756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.609 [2024-12-05 20:28:56.036809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.609 [2024-12-05 20:28:56.036823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.609 [2024-12-05 20:28:56.036878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.609 [2024-12-05 20:28:56.036894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.867 #35 NEW cov: 12404 ft: 14273 corp: 20/871b lim: 50 exec/s: 35 rss: 74Mb L: 47/48 MS: 1 ChangeByte- 00:09:02.867 [2024-12-05 20:28:56.076721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:09:02.867 [2024-12-05 20:28:56.076750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.867 [2024-12-05 20:28:56.076805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65535 00:09:02.867 [2024-12-05 20:28:56.076820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.867 [2024-12-05 20:28:56.076874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.867 [2024-12-05 20:28:56.076890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.867 [2024-12-05 20:28:56.076943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.867 [2024-12-05 20:28:56.076960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.867 #36 NEW cov: 12404 ft: 14274 corp: 21/918b lim: 50 exec/s: 36 rss: 
74Mb L: 47/48 MS: 1 ChangeBit- 00:09:02.867 [2024-12-05 20:28:56.136572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:09:02.867 [2024-12-05 20:28:56.136601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.867 #37 NEW cov: 12404 ft: 14681 corp: 22/936b lim: 50 exec/s: 37 rss: 74Mb L: 18/48 MS: 1 EraseBytes- 00:09:02.867 [2024-12-05 20:28:56.197061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446534066988646335 len:65536 00:09:02.867 [2024-12-05 20:28:56.197089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.867 [2024-12-05 20:28:56.197137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:02.867 [2024-12-05 20:28:56.197155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.867 [2024-12-05 20:28:56.197212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.867 [2024-12-05 20:28:56.197228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.867 [2024-12-05 20:28:56.197282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.867 [2024-12-05 20:28:56.197297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.867 #38 NEW cov: 12404 ft: 14696 corp: 23/983b lim: 50 exec/s: 38 rss: 74Mb L: 47/48 MS: 1 ShuffleBytes- 00:09:02.867 [2024-12-05 20:28:56.257232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446601141492907967 len:65536 00:09:02.867 [2024-12-05 20:28:56.257260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:02.867 [2024-12-05 20:28:56.257310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9765923850557587455 len:65536 00:09:02.867 [2024-12-05 20:28:56.257326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:02.867 [2024-12-05 20:28:56.257394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:02.867 [2024-12-05 20:28:56.257411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:02.867 [2024-12-05 20:28:56.257462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:02.867 [2024-12-05 20:28:56.257478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:02.867 #39 NEW cov: 12404 ft: 14716 corp: 24/1031b lim: 50 exec/s: 39 rss: 75Mb L: 48/48 MS: 1 CMP- DE: "~\000"- 00:09:03.127 
[2024-12-05 20:28:56.317431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.317460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.317511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057594021150720 len:65536 00:09:03.127 [2024-12-05 20:28:56.317527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.317580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.317597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.317650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.317667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.127 #40 NEW cov: 12404 ft: 14721 corp: 25/1079b lim: 50 exec/s: 40 rss: 75Mb L: 48/48 MS: 1 ChangeBinInt- 00:09:03.127 [2024-12-05 20:28:56.377571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:64768 00:09:03.127 [2024-12-05 20:28:56.377599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.377648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.377667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.377719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.377735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.377795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.377812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.127 #41 NEW cov: 12404 ft: 14746 corp: 26/1126b lim: 50 exec/s: 41 rss: 75Mb L: 47/48 MS: 1 ShuffleBytes- 00:09:03.127 [2024-12-05 20:28:56.417767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070672086015 len:65536 00:09:03.127 [2024-12-05 20:28:56.417795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.417853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:281474976710655 len:65536 00:09:03.127 
[2024-12-05 20:28:56.417868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.417923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073701163007 len:65536 00:09:03.127 [2024-12-05 20:28:56.417939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.417989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374686483966590975 len:1 00:09:03.127 [2024-12-05 20:28:56.418004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.418056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:17578660951882727667 len:12532 00:09:03.127 [2024-12-05 20:28:56.418072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:03.127 #42 NEW cov: 12404 ft: 14796 corp: 27/1176b lim: 50 exec/s: 42 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:09:03.127 [2024-12-05 20:28:56.477871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.477899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.477948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.477964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.478016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.478032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.478084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.478099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.127 #43 NEW cov: 12404 ft: 14800 corp: 28/1224b lim: 50 exec/s: 43 rss: 75Mb L: 48/50 MS: 1 InsertByte- 00:09:03.127 [2024-12-05 20:28:56.518079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17578661996817282035 len:62452 00:09:03.127 [2024-12-05 20:28:56.518112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.518154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17578661999652631539 len:2049 00:09:03.127 [2024-12-05 20:28:56.518170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.127 
[2024-12-05 20:28:56.518221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17578661999652631539 len:62452 00:09:03.127 [2024-12-05 20:28:56.518253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.518305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:17578661999652631539 len:62452 00:09:03.127 [2024-12-05 20:28:56.518321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.518375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:17578661999652631539 len:62452 00:09:03.127 [2024-12-05 20:28:56.518390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:03.127 #44 NEW cov: 12404 ft: 14821 corp: 29/1274b lim: 50 exec/s: 44 rss: 75Mb L: 50/50 MS: 1 CMP- DE: "\010\000"- 00:09:03.127 [2024-12-05 20:28:56.558080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744070672086015 len:65536 00:09:03.127 [2024-12-05 20:28:56.558108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.558155] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551487 len:65536 00:09:03.127 [2024-12-05 20:28:56.558172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.558221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:03.127 [2024-12-05 20:28:56.558237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.127 [2024-12-05 20:28:56.558305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:17578660956177694707 len:12532 00:09:03.127 [2024-12-05 20:28:56.558322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.386 #45 NEW cov: 12404 ft: 14834 corp: 30/1314b lim: 50 exec/s: 45 rss: 75Mb L: 40/50 MS: 1 ChangeBit- 00:09:03.386 [2024-12-05 20:28:56.598200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446534066988646335 len:65536 00:09:03.386 [2024-12-05 20:28:56.598245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.598292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:03.386 [2024-12-05 20:28:56.598309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.598362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 
00:09:03.386 [2024-12-05 20:28:56.598378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.598432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:64000 00:09:03.386 [2024-12-05 20:28:56.598452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.386 #46 NEW cov: 12404 ft: 14840 corp: 31/1361b lim: 50 exec/s: 46 rss: 75Mb L: 47/50 MS: 1 ChangeBinInt- 00:09:03.386 [2024-12-05 20:28:56.658378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:64768 00:09:03.386 [2024-12-05 20:28:56.658407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.658450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446743923385696255 len:65536 00:09:03.386 [2024-12-05 20:28:56.658466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.658519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:03.386 [2024-12-05 20:28:56.658536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.658588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:09:03.386 [2024-12-05 20:28:56.658604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.386 #47 NEW cov: 12404 ft: 14882 corp: 32/1409b lim: 50 exec/s: 47 rss: 75Mb L: 48/50 MS: 1 InsertByte- 00:09:03.386 [2024-12-05 20:28:56.698453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:65536 00:09:03.386 [2024-12-05 20:28:56.698483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.698529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:09:03.386 [2024-12-05 20:28:56.698546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.698600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:03.386 [2024-12-05 20:28:56.698615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.698669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18158513697557839871 len:65536 00:09:03.386 [2024-12-05 20:28:56.698686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 
m:0 dnr:1 00:09:03.386 #48 NEW cov: 12404 ft: 14915 corp: 33/1457b lim: 50 exec/s: 48 rss: 75Mb L: 48/50 MS: 1 ChangeBinInt- 00:09:03.386 [2024-12-05 20:28:56.758260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:09:03.386 [2024-12-05 20:28:56.758287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.386 #49 NEW cov: 12404 ft: 14967 corp: 34/1475b lim: 50 exec/s: 49 rss: 75Mb L: 18/50 MS: 1 CopyPart- 00:09:03.386 [2024-12-05 20:28:56.818771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551551 len:64768 00:09:03.386 [2024-12-05 20:28:56.818798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.818849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551425 len:65536 00:09:03.386 [2024-12-05 20:28:56.818870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.818924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:09:03.386 [2024-12-05 20:28:56.818941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:03.386 [2024-12-05 20:28:56.818996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073692840959 len:65536 00:09:03.386 [2024-12-05 20:28:56.819013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:03.646 #50 NEW cov: 12404 ft: 14978 corp: 35/1522b lim: 50 exec/s: 25 rss: 75Mb L: 47/50 MS: 1 ChangeBinInt- 00:09:03.646 #50 DONE cov: 12404 ft: 14978 corp: 35/1522b lim: 50 exec/s: 25 rss: 75Mb 00:09:03.646 ###### Recommended dictionary. ###### 00:09:03.646 "\000\000\000\000\000\000\000\000" # Uses: 1 00:09:03.646 "~\000" # Uses: 0 00:09:03.646 "\010\000" # Uses: 0 00:09:03.646 ###### End of recommended dictionary. 
######
00:09:03.646 Done 50 runs in 2 second(s)
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420'
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:03.646 20:28:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20
00:09:03.646 [2024-12-05 20:28:57.022261] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization...
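The -Z 20 invocation above selects the reservation-acquire target: the NEW_FUNC lines that follow credit the first new coverage to fuzz_nvm_reservation_acquire_command (llvm_nvme_fuzz.c:597) and TestOneInput (llvm_nvme_fuzz.c:780), and each mutated input is echoed as a RESERVATION ACQUIRE (opcode 0x11) command together with its completion. Below is a minimal sketch of how a libFuzzer target of this shape consumes the raw bytes; the struct layout and the submit_and_poll helper are simplified illustrative stand-ins, not SPDK's actual definitions.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for an NVMe submission-queue entry; the real
 * layout is struct spdk_nvme_cmd in SPDK's nvme_spec.h. */
struct fuzz_nvme_cmd {
	uint8_t  opc;    /* opcode; 0x11 is RESERVATION ACQUIRE */
	uint32_t nsid;   /* namespace ID (the log's commands carry nsid:0) */
	uint32_t cdw10;  /* racqa/iekey/rtype bits for the acquire */
	uint64_t crkey;  /* current reservation key */
	uint64_t prkey;  /* preempt reservation key */
};

/* Illustrative submit hook; a real harness queues the command on an I/O
 * qpair and polls for the completion that the log then prints. */
static void submit_and_poll(const struct fuzz_nvme_cmd *cmd)
{
	printf("RESERVATION ACQUIRE nsid:%u cdw10:0x%x\n",
	       (unsigned)cmd->nsid, (unsigned)cmd->cdw10);
}

/* libFuzzer entry point: each generated input is reinterpreted as one
 * command, which is why the log shows one print_command/print_completion
 * pair per attempt. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct fuzz_nvme_cmd cmd;

	if (size < sizeof(cmd)) {
		return 0; /* not enough bytes to fill one command */
	}
	memcpy(&cmd, data, sizeof(cmd));
	cmd.opc = 0x11; /* pin the opcode so mutations explore the other fields */
	submit_and_poll(&cmd);
	return 0;
}
```

Built with clang -fsanitize=fuzzer, a target of this shape produces the same kind of startup banner seen next (seed, inline 8-bit counters, empty corpus). The uniform INVALID NAMESPACE OR FORMAT (00/0b) completions on nsid:0 suggest the target rejects every generated command at namespace validation, consistent with the warning below that no interesting inputs were found so far.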
00:09:03.646 [2024-12-05 20:28:57.022337] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846808 ] 00:09:03.906 [2024-12-05 20:28:57.232996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.906 [2024-12-05 20:28:57.271528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.906 [2024-12-05 20:28:57.330705] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.165 [2024-12-05 20:28:57.346969] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:04.165 INFO: Running with entropic power schedule (0xFF, 100). 00:09:04.165 INFO: Seed: 2375717258 00:09:04.165 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:09:04.165 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:09:04.165 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:04.165 INFO: A corpus is not provided, starting from an empty corpus 00:09:04.165 #2 INITED exec/s: 0 rss: 67Mb 00:09:04.165 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:04.165 This may also happen if the target rejected all inputs we tried so far 00:09:04.165 [2024-12-05 20:28:57.396618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.165 [2024-12-05 20:28:57.396653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.165 [2024-12-05 20:28:57.396696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:04.165 [2024-12-05 20:28:57.396712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.165 [2024-12-05 20:28:57.396768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:04.165 [2024-12-05 20:28:57.396784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:04.165 [2024-12-05 20:28:57.396837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:04.165 [2024-12-05 20:28:57.396852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:04.424 NEW_FUNC[1/718]: 0x45df18 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:09:04.424 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:04.424 #12 NEW cov: 12226 ft: 12234 corp: 2/74b lim: 90 exec/s: 0 rss: 74Mb L: 73/73 MS: 5 CopyPart-ShuffleBytes-CrossOver-InsertByte-InsertRepeatedBytes- 00:09:04.424 [2024-12-05 20:28:57.737044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.424 [2024-12-05 20:28:57.737082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:09:04.424 #13 NEW cov: 12348 ft: 13763 corp: 3/99b lim: 90 exec/s: 0 rss: 74Mb L: 25/73 MS: 1 CrossOver- 00:09:04.424 [2024-12-05 20:28:57.787081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.424 [2024-12-05 20:28:57.787110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.424 #19 NEW cov: 12354 ft: 13981 corp: 4/124b lim: 90 exec/s: 0 rss: 74Mb L: 25/73 MS: 1 CopyPart- 00:09:04.424 [2024-12-05 20:28:57.847243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.424 [2024-12-05 20:28:57.847272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.682 #20 NEW cov: 12439 ft: 14226 corp: 5/149b lim: 90 exec/s: 0 rss: 74Mb L: 25/73 MS: 1 ChangeByte- 00:09:04.682 [2024-12-05 20:28:57.887321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.682 [2024-12-05 20:28:57.887349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.682 #21 NEW cov: 12439 ft: 14268 corp: 6/174b lim: 90 exec/s: 0 rss: 74Mb L: 25/73 MS: 1 ChangeByte- 00:09:04.682 [2024-12-05 20:28:57.947691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.682 [2024-12-05 20:28:57.947720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.682 [2024-12-05 20:28:57.947777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:04.682 [2024-12-05 20:28:57.947793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.682 #22 NEW cov: 12439 ft: 14612 corp: 7/210b lim: 90 exec/s: 0 rss: 74Mb L: 36/73 MS: 1 CopyPart- 00:09:04.682 [2024-12-05 20:28:57.987768] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.682 [2024-12-05 20:28:57.987797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.682 [2024-12-05 20:28:57.987856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:04.682 [2024-12-05 20:28:57.987872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.682 #23 NEW cov: 12439 ft: 14664 corp: 8/251b lim: 90 exec/s: 0 rss: 74Mb L: 41/73 MS: 1 EraseBytes- 00:09:04.682 [2024-12-05 20:28:58.048077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.682 [2024-12-05 20:28:58.048105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.682 [2024-12-05 20:28:58.048162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:04.682 [2024-12-05 20:28:58.048179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:09:04.682 [2024-12-05 20:28:58.048234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:04.682 [2024-12-05 20:28:58.048251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:04.682 #24 NEW cov: 12439 ft: 14959 corp: 9/307b lim: 90 exec/s: 0 rss: 74Mb L: 56/73 MS: 1 InsertRepeatedBytes- 00:09:04.682 [2024-12-05 20:28:58.108400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.682 [2024-12-05 20:28:58.108429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.682 [2024-12-05 20:28:58.108492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:04.682 [2024-12-05 20:28:58.108508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.682 [2024-12-05 20:28:58.108565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:04.682 [2024-12-05 20:28:58.108579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:04.682 [2024-12-05 20:28:58.108633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:04.682 [2024-12-05 20:28:58.108650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:04.940 #25 NEW cov: 12439 ft: 15011 corp: 10/388b lim: 90 exec/s: 0 rss: 74Mb L: 81/81 MS: 1 CrossOver- 00:09:04.940 [2024-12-05 20:28:58.148021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.940 [2024-12-05 20:28:58.148049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.940 #26 NEW cov: 12439 ft: 15083 corp: 11/413b lim: 90 exec/s: 0 rss: 74Mb L: 25/81 MS: 1 ChangeByte- 00:09:04.940 [2024-12-05 20:28:58.188418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.940 [2024-12-05 20:28:58.188449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.188504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:04.940 [2024-12-05 20:28:58.188520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.188576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:04.940 [2024-12-05 20:28:58.188592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:04.940 #27 NEW cov: 12439 ft: 15138 corp: 12/469b lim: 90 exec/s: 0 rss: 74Mb L: 56/81 MS: 1 CopyPart- 00:09:04.940 [2024-12-05 20:28:58.228253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.940 [2024-12-05 20:28:58.228281] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.940 #28 NEW cov: 12439 ft: 15205 corp: 13/494b lim: 90 exec/s: 0 rss: 74Mb L: 25/81 MS: 1 ChangeByte- 00:09:04.940 [2024-12-05 20:28:58.268799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.940 [2024-12-05 20:28:58.268826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.268893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:04.940 [2024-12-05 20:28:58.268911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.268966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:04.940 [2024-12-05 20:28:58.268983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.269039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:04.940 [2024-12-05 20:28:58.269055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:04.940 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:04.940 #29 NEW cov: 12462 ft: 15250 corp: 14/577b lim: 90 exec/s: 0 rss: 74Mb L: 83/83 MS: 1 CopyPart- 00:09:04.940 [2024-12-05 20:28:58.308908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.940 [2024-12-05 20:28:58.308936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.309002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:04.940 [2024-12-05 20:28:58.309019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.309074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:04.940 [2024-12-05 20:28:58.309088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.309143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:04.940 [2024-12-05 20:28:58.309160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:04.940 #30 NEW cov: 12462 ft: 15263 corp: 15/660b lim: 90 exec/s: 0 rss: 74Mb L: 83/83 MS: 1 ChangeBinInt- 00:09:04.940 [2024-12-05 20:28:58.368903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:04.940 [2024-12-05 20:28:58.368933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.368987] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:04.940 [2024-12-05 20:28:58.369003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:04.940 [2024-12-05 20:28:58.369058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:04.940 [2024-12-05 20:28:58.369075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.199 #31 NEW cov: 12462 ft: 15317 corp: 16/715b lim: 90 exec/s: 31 rss: 74Mb L: 55/83 MS: 1 CrossOver- 00:09:05.199 [2024-12-05 20:28:58.409020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.199 [2024-12-05 20:28:58.409046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.409100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.199 [2024-12-05 20:28:58.409116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.409174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.199 [2024-12-05 20:28:58.409189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.199 #32 NEW cov: 12462 ft: 15348 corp: 17/771b lim: 90 exec/s: 32 rss: 74Mb L: 56/83 MS: 1 ChangeBit- 00:09:05.199 [2024-12-05 20:28:58.469340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.199 [2024-12-05 20:28:58.469367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.469430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.199 [2024-12-05 20:28:58.469446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.469502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.199 [2024-12-05 20:28:58.469518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.469573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:05.199 [2024-12-05 20:28:58.469588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.199 #38 NEW cov: 12462 ft: 15359 corp: 18/848b lim: 90 exec/s: 38 rss: 74Mb L: 77/83 MS: 1 CrossOver- 00:09:05.199 [2024-12-05 20:28:58.529153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.199 [2024-12-05 20:28:58.529180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.529236] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.199 [2024-12-05 20:28:58.529252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.199 #44 NEW cov: 12462 ft: 15367 corp: 19/892b lim: 90 exec/s: 44 rss: 74Mb L: 44/83 MS: 1 EraseBytes- 00:09:05.199 [2024-12-05 20:28:58.569611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.199 [2024-12-05 20:28:58.569638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.569682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.199 [2024-12-05 20:28:58.569697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.569753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.199 [2024-12-05 20:28:58.569784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.569838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:05.199 [2024-12-05 20:28:58.569854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.199 #45 NEW cov: 12462 ft: 15377 corp: 20/975b lim: 90 exec/s: 45 rss: 75Mb L: 83/83 MS: 1 ChangeBinInt- 00:09:05.199 [2024-12-05 20:28:58.629611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.199 [2024-12-05 20:28:58.629638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.629705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.199 [2024-12-05 20:28:58.629722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.199 [2024-12-05 20:28:58.629782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.199 [2024-12-05 20:28:58.629797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.458 #51 NEW cov: 12462 ft: 15389 corp: 21/1031b lim: 90 exec/s: 51 rss: 75Mb L: 56/83 MS: 1 ChangeBit- 00:09:05.458 [2024-12-05 20:28:58.669421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.458 [2024-12-05 20:28:58.669448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.458 #52 NEW cov: 12462 ft: 15425 corp: 22/1056b lim: 90 exec/s: 52 rss: 75Mb L: 25/83 MS: 1 ChangeBit- 00:09:05.458 [2024-12-05 20:28:58.729742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.458 [2024-12-05 20:28:58.729775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.458 [2024-12-05 20:28:58.729815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.458 [2024-12-05 20:28:58.729831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.458 #53 NEW cov: 12462 ft: 15431 corp: 23/1101b lim: 90 exec/s: 53 rss: 75Mb L: 45/83 MS: 1 EraseBytes- 00:09:05.458 [2024-12-05 20:28:58.790249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.458 [2024-12-05 20:28:58.790278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.458 [2024-12-05 20:28:58.790340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.458 [2024-12-05 20:28:58.790356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.458 [2024-12-05 20:28:58.790413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.458 [2024-12-05 20:28:58.790429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.458 [2024-12-05 20:28:58.790484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:05.458 [2024-12-05 20:28:58.790503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.458 #54 NEW cov: 12462 ft: 15455 corp: 24/1184b lim: 90 exec/s: 54 rss: 75Mb L: 83/83 MS: 1 ChangeBit- 00:09:05.458 [2024-12-05 20:28:58.830178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.458 [2024-12-05 20:28:58.830206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.458 [2024-12-05 20:28:58.830254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.458 [2024-12-05 20:28:58.830270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.458 [2024-12-05 20:28:58.830325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.458 [2024-12-05 20:28:58.830341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.458 #55 NEW cov: 12462 ft: 15475 corp: 25/1240b lim: 90 exec/s: 55 rss: 75Mb L: 56/83 MS: 1 InsertRepeatedBytes- 00:09:05.458 [2024-12-05 20:28:58.890377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.458 [2024-12-05 20:28:58.890404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.458 [2024-12-05 20:28:58.890442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.458 [2024-12-05 20:28:58.890458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.458 [2024-12-05 20:28:58.890513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.458 [2024-12-05 20:28:58.890530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.717 #56 NEW cov: 12462 ft: 15485 corp: 26/1311b lim: 90 exec/s: 56 rss: 75Mb L: 71/83 MS: 1 CopyPart- 00:09:05.717 [2024-12-05 20:28:58.950683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.717 [2024-12-05 20:28:58.950710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:58.950785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.717 [2024-12-05 20:28:58.950802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:58.950868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.717 [2024-12-05 20:28:58.950882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:58.950936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:05.717 [2024-12-05 20:28:58.950951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.717 #57 NEW cov: 12462 ft: 15527 corp: 27/1392b lim: 90 exec/s: 57 rss: 75Mb L: 81/83 MS: 1 ShuffleBytes- 00:09:05.717 [2024-12-05 20:28:59.010846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.717 [2024-12-05 20:28:59.010874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:59.010942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.717 [2024-12-05 20:28:59.010959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:59.011017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.717 [2024-12-05 20:28:59.011032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:59.011087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:05.717 [2024-12-05 20:28:59.011103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.717 #58 NEW cov: 12462 ft: 15536 corp: 28/1465b lim: 90 exec/s: 58 rss: 75Mb L: 73/83 MS: 1 InsertRepeatedBytes- 00:09:05.717 [2024-12-05 20:28:59.050782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.717 [2024-12-05 20:28:59.050809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:59.050871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.717 [2024-12-05 20:28:59.050888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:59.050943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.717 [2024-12-05 20:28:59.050958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.717 #59 NEW cov: 12462 ft: 15611 corp: 29/1520b lim: 90 exec/s: 59 rss: 75Mb L: 55/83 MS: 1 ChangeByte- 00:09:05.717 [2024-12-05 20:28:59.111113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.717 [2024-12-05 20:28:59.111142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:59.111207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.717 [2024-12-05 20:28:59.111224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:59.111281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.717 [2024-12-05 20:28:59.111296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.717 [2024-12-05 20:28:59.111352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:05.717 [2024-12-05 20:28:59.111369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.717 #60 NEW cov: 12462 ft: 15694 corp: 30/1603b lim: 90 exec/s: 60 rss: 75Mb L: 83/83 MS: 1 ShuffleBytes- 00:09:05.977 [2024-12-05 20:28:59.170997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.977 [2024-12-05 20:28:59.171024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.977 [2024-12-05 20:28:59.171089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.977 [2024-12-05 20:28:59.171106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.977 #61 NEW cov: 12462 ft: 15713 corp: 31/1639b lim: 90 exec/s: 61 rss: 75Mb L: 36/83 MS: 1 CMP- DE: "\001\000\000\000"- 00:09:05.977 [2024-12-05 20:28:59.211105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.977 [2024-12-05 20:28:59.211132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.977 [2024-12-05 20:28:59.211199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.977 [2024-12-05 20:28:59.211217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.977 #62 NEW cov: 12462 ft: 15717 corp: 32/1676b lim: 90 exec/s: 62 rss: 75Mb L: 37/83 MS: 1 InsertByte- 00:09:05.977 [2024-12-05 20:28:59.251509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.977 [2024-12-05 20:28:59.251538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.977 [2024-12-05 20:28:59.251606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.977 [2024-12-05 20:28:59.251623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.977 [2024-12-05 20:28:59.251678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.977 [2024-12-05 20:28:59.251692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.977 [2024-12-05 20:28:59.251752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:05.977 [2024-12-05 20:28:59.251770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.977 #63 NEW cov: 12462 ft: 15721 corp: 33/1760b lim: 90 exec/s: 63 rss: 75Mb L: 84/84 MS: 1 InsertByte- 00:09:05.977 [2024-12-05 20:28:59.311612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.977 [2024-12-05 20:28:59.311641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.977 [2024-12-05 20:28:59.311690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.977 [2024-12-05 20:28:59.311706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:05.977 [2024-12-05 20:28:59.311778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:05.977 [2024-12-05 20:28:59.311793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:05.977 [2024-12-05 20:28:59.311850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:09:05.977 [2024-12-05 20:28:59.311866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:05.977 #64 NEW cov: 12462 ft: 15751 corp: 34/1833b lim: 90 exec/s: 64 rss: 75Mb L: 73/84 MS: 1 CrossOver- 00:09:05.977 [2024-12-05 20:28:59.351697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:05.977 [2024-12-05 20:28:59.351725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:05.977 [2024-12-05 20:28:59.351795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:05.977 [2024-12-05 20:28:59.351812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:05.977 [2024-12-05 20:28:59.351866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0
00:09:05.977 [2024-12-05 20:28:59.351883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:05.977 [2024-12-05 20:28:59.351935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0
00:09:05.977 [2024-12-05 20:28:59.351951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:09:05.977 #65 NEW cov: 12462 ft: 15755 corp: 35/1918b lim: 90 exec/s: 32 rss: 75Mb L: 85/85 MS: 1 InsertRepeatedBytes-
00:09:05.977 #65 DONE cov: 12462 ft: 15755 corp: 35/1918b lim: 90 exec/s: 32 rss: 75Mb
00:09:05.977 ###### Recommended dictionary. ######
00:09:05.977 "\001\000\000\000" # Uses: 0
00:09:05.977 ###### End of recommended dictionary. ######
00:09:05.977 Done 65 runs in 2 second(s)
00:09:06.236 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz
00:09:06.236 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:06.236 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:06.236 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421'
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:06.237 20:28:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21
00:09:06.237 [2024-12-05 20:28:59.560059] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization...
00:09:06.237 [2024-12-05 20:28:59.560136] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847139 ]
00:09:06.496 [2024-12-05 20:28:59.772141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:06.496 [2024-12-05 20:28:59.809562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.496 [2024-12-05 20:28:59.868561] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:06.496 [2024-12-05 20:28:59.884816] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:09:06.496 INFO: Running with entropic power schedule (0xFF, 100).
00:09:06.496 INFO: Seed: 619746432
00:09:06.496 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad),
00:09:06.496 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0),
00:09:06.496 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21
00:09:06.496 INFO: A corpus is not provided, starting from an empty corpus
00:09:06.496 #2 INITED exec/s: 0 rss: 67Mb
00:09:06.496 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:06.496 This may also happen if the target rejected all inputs we tried so far
00:09:06.755 [2024-12-05 20:28:59.940152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:09:06.755 [2024-12-05 20:28:59.940188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:07.013 NEW_FUNC[1/718]: 0x461148 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623
00:09:07.013 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:09:07.013 #7 NEW cov: 12210 ft: 12196 corp: 2/11b lim: 50 exec/s: 0 rss: 74Mb L: 10/10 MS: 5 ChangeByte-CrossOver-ChangeByte-CopyPart-InsertRepeatedBytes-
00:09:07.014 [2024-12-05 20:29:00.280978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:09:07.014 [2024-12-05 20:29:00.281029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:07.014 #8 NEW cov: 12323 ft: 12643 corp: 3/26b lim: 50 exec/s: 0 rss: 74Mb L: 15/15 MS: 1 CopyPart-
00:09:07.014 [2024-12-05 20:29:00.341312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0
00:09:07.014 [2024-12-05 20:29:00.341339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:07.014 [2024-12-05 20:29:00.341392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:09:07.014 [2024-12-05 20:29:00.341409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b)
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.014 [2024-12-05 20:29:00.341459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.014 [2024-12-05 20:29:00.341474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.014 #11 NEW cov: 12329 ft: 13764 corp: 4/65b lim: 50 exec/s: 0 rss: 74Mb L: 39/39 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:09:07.014 [2024-12-05 20:29:00.381109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.014 [2024-12-05 20:29:00.381135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.014 #12 NEW cov: 12414 ft: 14003 corp: 5/80b lim: 50 exec/s: 0 rss: 74Mb L: 15/39 MS: 1 ChangeBinInt- 00:09:07.014 [2024-12-05 20:29:00.441245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.014 [2024-12-05 20:29:00.441273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.273 #13 NEW cov: 12414 ft: 14110 corp: 6/95b lim: 50 exec/s: 0 rss: 74Mb L: 15/39 MS: 1 ChangeBinInt- 00:09:07.273 [2024-12-05 20:29:00.501705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.273 [2024-12-05 20:29:00.501732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.273 [2024-12-05 20:29:00.501772] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.273 [2024-12-05 20:29:00.501787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.273 [2024-12-05 20:29:00.501840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.273 [2024-12-05 20:29:00.501855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.273 #18 NEW cov: 12414 ft: 14264 corp: 7/134b lim: 50 exec/s: 0 rss: 74Mb L: 39/39 MS: 5 ChangeBit-CopyPart-EraseBytes-CopyPart-InsertRepeatedBytes- 00:09:07.273 [2024-12-05 20:29:00.541522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.273 [2024-12-05 20:29:00.541553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.273 #19 NEW cov: 12414 ft: 14332 corp: 8/151b lim: 50 exec/s: 0 rss: 74Mb L: 17/39 MS: 1 InsertRepeatedBytes- 00:09:07.273 [2024-12-05 20:29:00.582079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.273 [2024-12-05 20:29:00.582106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.273 [2024-12-05 20:29:00.582158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.273 [2024-12-05 20:29:00.582174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:09:07.273 [2024-12-05 20:29:00.582226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.273 [2024-12-05 20:29:00.582242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.273 [2024-12-05 20:29:00.582294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:07.273 [2024-12-05 20:29:00.582310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.273 #20 NEW cov: 12414 ft: 14677 corp: 9/195b lim: 50 exec/s: 0 rss: 75Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:09:07.273 [2024-12-05 20:29:00.642122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.273 [2024-12-05 20:29:00.642150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.273 [2024-12-05 20:29:00.642205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.273 [2024-12-05 20:29:00.642223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.273 [2024-12-05 20:29:00.642276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.273 [2024-12-05 20:29:00.642292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.273 #21 NEW cov: 12414 ft: 14710 corp: 10/234b lim: 50 exec/s: 0 rss: 75Mb L: 39/44 MS: 1 ChangeBit- 00:09:07.273 [2024-12-05 20:29:00.701996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.273 [2024-12-05 20:29:00.702023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.532 #23 NEW cov: 12414 ft: 14739 corp: 11/246b lim: 50 exec/s: 0 rss: 75Mb L: 12/44 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:07.532 [2024-12-05 20:29:00.742516] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.532 [2024-12-05 20:29:00.742544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.532 [2024-12-05 20:29:00.742610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.532 [2024-12-05 20:29:00.742626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.532 [2024-12-05 20:29:00.742676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.532 [2024-12-05 20:29:00.742691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.532 [2024-12-05 20:29:00.742742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:07.532 [2024-12-05 20:29:00.742765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.532 #24 NEW cov: 12414 ft: 14787 corp: 12/294b lim: 50 exec/s: 0 rss: 75Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:09:07.532 [2024-12-05 20:29:00.802256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.532 [2024-12-05 20:29:00.802285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.532 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:07.532 #25 NEW cov: 12437 ft: 14916 corp: 13/306b lim: 50 exec/s: 0 rss: 75Mb L: 12/48 MS: 1 ChangeBit- 00:09:07.532 [2024-12-05 20:29:00.862572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.532 [2024-12-05 20:29:00.862601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.532 [2024-12-05 20:29:00.862651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.532 [2024-12-05 20:29:00.862666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.532 #26 NEW cov: 12437 ft: 15180 corp: 14/328b lim: 50 exec/s: 0 rss: 75Mb L: 22/48 MS: 1 EraseBytes- 00:09:07.532 [2024-12-05 20:29:00.922863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.532 [2024-12-05 20:29:00.922891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.532 [2024-12-05 20:29:00.922937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.532 [2024-12-05 20:29:00.922954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.532 [2024-12-05 20:29:00.923007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.532 [2024-12-05 20:29:00.923021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.532 #27 NEW cov: 12437 ft: 15191 corp: 15/367b lim: 50 exec/s: 27 rss: 75Mb L: 39/48 MS: 1 ShuffleBytes- 00:09:07.532 [2024-12-05 20:29:00.962986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.532 [2024-12-05 20:29:00.963013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.532 [2024-12-05 20:29:00.963050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.532 [2024-12-05 20:29:00.963066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.533 [2024-12-05 20:29:00.963118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.533 [2024-12-05 20:29:00.963134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.791 #28 NEW cov: 12437 ft: 
15232 corp: 16/401b lim: 50 exec/s: 28 rss: 75Mb L: 34/48 MS: 1 InsertRepeatedBytes- 00:09:07.791 [2024-12-05 20:29:01.022815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.791 [2024-12-05 20:29:01.022842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.791 #29 NEW cov: 12437 ft: 15265 corp: 17/419b lim: 50 exec/s: 29 rss: 75Mb L: 18/48 MS: 1 InsertByte- 00:09:07.791 [2024-12-05 20:29:01.063390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.791 [2024-12-05 20:29:01.063418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.791 [2024-12-05 20:29:01.063463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.791 [2024-12-05 20:29:01.063482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.791 [2024-12-05 20:29:01.063548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.791 [2024-12-05 20:29:01.063564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.791 [2024-12-05 20:29:01.063616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:07.791 [2024-12-05 20:29:01.063632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.791 #30 NEW cov: 12437 ft: 15312 corp: 18/463b lim: 50 exec/s: 30 rss: 75Mb L: 44/48 MS: 1 CopyPart- 00:09:07.791 [2024-12-05 20:29:01.103469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.791 [2024-12-05 20:29:01.103496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.791 [2024-12-05 20:29:01.103561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.791 [2024-12-05 20:29:01.103577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.791 [2024-12-05 20:29:01.103630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.791 [2024-12-05 20:29:01.103645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.791 [2024-12-05 20:29:01.103700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:07.791 [2024-12-05 20:29:01.103716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.791 #31 NEW cov: 12437 ft: 15358 corp: 19/511b lim: 50 exec/s: 31 rss: 75Mb L: 48/48 MS: 1 ChangeByte- 00:09:07.791 [2024-12-05 20:29:01.163788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.791 [2024-12-05 20:29:01.163815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.791 [2024-12-05 20:29:01.163889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:07.791 [2024-12-05 20:29:01.163904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:07.791 [2024-12-05 20:29:01.163959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:07.791 [2024-12-05 20:29:01.163973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:07.792 [2024-12-05 20:29:01.164025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:07.792 [2024-12-05 20:29:01.164041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:07.792 [2024-12-05 20:29:01.164093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:09:07.792 [2024-12-05 20:29:01.164108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:07.792 #32 NEW cov: 12437 ft: 15427 corp: 20/561b lim: 50 exec/s: 32 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:09:07.792 [2024-12-05 20:29:01.203350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:07.792 [2024-12-05 20:29:01.203378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:07.792 #33 NEW cov: 12437 ft: 15496 corp: 21/571b lim: 50 exec/s: 33 rss: 75Mb L: 10/50 MS: 1 ChangeByte- 00:09:08.050 [2024-12-05 20:29:01.243723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.050 [2024-12-05 20:29:01.243754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.050 [2024-12-05 20:29:01.243790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:08.050 [2024-12-05 20:29:01.243806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.050 [2024-12-05 20:29:01.243854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:08.050 [2024-12-05 20:29:01.243871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.050 #34 NEW cov: 12437 ft: 15509 corp: 22/610b lim: 50 exec/s: 34 rss: 75Mb L: 39/50 MS: 1 CrossOver- 00:09:08.050 [2024-12-05 20:29:01.283833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.050 [2024-12-05 20:29:01.283859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.050 [2024-12-05 20:29:01.283922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:08.050 [2024-12-05 20:29:01.283939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.050 [2024-12-05 20:29:01.283990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:08.050 [2024-12-05 20:29:01.284004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.050 #35 NEW cov: 12437 ft: 15526 corp: 23/649b lim: 50 exec/s: 35 rss: 75Mb L: 39/50 MS: 1 ChangeBinInt- 00:09:08.050 [2024-12-05 20:29:01.343724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.050 [2024-12-05 20:29:01.343755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.050 #36 NEW cov: 12437 ft: 15537 corp: 24/666b lim: 50 exec/s: 36 rss: 75Mb L: 17/50 MS: 1 ChangeBit- 00:09:08.050 [2024-12-05 20:29:01.384218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.050 [2024-12-05 20:29:01.384243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.050 [2024-12-05 20:29:01.384310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:08.050 [2024-12-05 20:29:01.384326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.050 [2024-12-05 20:29:01.384376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:08.050 [2024-12-05 20:29:01.384391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.050 [2024-12-05 20:29:01.384442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:08.050 [2024-12-05 20:29:01.384457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.050 #37 NEW cov: 12437 ft: 15547 corp: 25/710b lim: 50 exec/s: 37 rss: 75Mb L: 44/50 MS: 1 ShuffleBytes- 00:09:08.050 [2024-12-05 20:29:01.443944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.050 [2024-12-05 20:29:01.443971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.050 #41 NEW cov: 12437 ft: 15580 corp: 26/720b lim: 50 exec/s: 41 rss: 75Mb L: 10/50 MS: 4 EraseBytes-CopyPart-ChangeBit-InsertByte- 00:09:08.309 [2024-12-05 20:29:01.504567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.309 [2024-12-05 20:29:01.504595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.504642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:08.309 [2024-12-05 20:29:01.504657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.504710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:2 nsid:0 00:09:08.309 [2024-12-05 20:29:01.504726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.504796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:08.309 [2024-12-05 20:29:01.504814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.309 #42 NEW cov: 12437 ft: 15588 corp: 27/764b lim: 50 exec/s: 42 rss: 75Mb L: 44/50 MS: 1 ShuffleBytes- 00:09:08.309 [2024-12-05 20:29:01.564592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.309 [2024-12-05 20:29:01.564619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.564664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:08.309 [2024-12-05 20:29:01.564680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.564730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:08.309 [2024-12-05 20:29:01.564748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.309 #43 NEW cov: 12437 ft: 15600 corp: 28/803b lim: 50 exec/s: 43 rss: 75Mb L: 39/50 MS: 1 ChangeBinInt- 00:09:08.309 [2024-12-05 20:29:01.604860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.309 [2024-12-05 20:29:01.604887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.604939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:08.309 [2024-12-05 20:29:01.604955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.605006] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:08.309 [2024-12-05 20:29:01.605037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.605088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:08.309 [2024-12-05 20:29:01.605104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.309 #44 NEW cov: 12437 ft: 15604 corp: 29/847b lim: 50 exec/s: 44 rss: 76Mb L: 44/50 MS: 1 ChangeBinInt- 00:09:08.309 [2024-12-05 20:29:01.665010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.309 [2024-12-05 20:29:01.665037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.665085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:1 nsid:0 00:09:08.309 [2024-12-05 20:29:01.665101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.665156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:08.309 [2024-12-05 20:29:01.665172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.309 [2024-12-05 20:29:01.665224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:08.309 [2024-12-05 20:29:01.665239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.309 #45 NEW cov: 12437 ft: 15608 corp: 30/887b lim: 50 exec/s: 45 rss: 76Mb L: 40/50 MS: 1 InsertByte- 00:09:08.309 [2024-12-05 20:29:01.704692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.309 [2024-12-05 20:29:01.704720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.569 #46 NEW cov: 12437 ft: 15618 corp: 31/898b lim: 50 exec/s: 46 rss: 76Mb L: 11/50 MS: 1 InsertByte- 00:09:08.569 [2024-12-05 20:29:01.764891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.569 [2024-12-05 20:29:01.764917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.569 #47 NEW cov: 12437 ft: 15697 corp: 32/909b lim: 50 exec/s: 47 rss: 76Mb L: 11/50 MS: 1 ChangeByte- 00:09:08.569 [2024-12-05 20:29:01.825467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.569 [2024-12-05 20:29:01.825495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:08.569 [2024-12-05 20:29:01.825557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:08.569 [2024-12-05 20:29:01.825574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:08.569 [2024-12-05 20:29:01.825627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:08.569 [2024-12-05 20:29:01.825641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:08.569 [2024-12-05 20:29:01.825694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:08.569 [2024-12-05 20:29:01.825709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:08.569 #48 NEW cov: 12437 ft: 15711 corp: 33/953b lim: 50 exec/s: 48 rss: 76Mb L: 44/50 MS: 1 InsertRepeatedBytes- 00:09:08.569 [2024-12-05 20:29:01.885476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:08.569 [2024-12-05 20:29:01.885502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1
00:09:08.569 [2024-12-05 20:29:01.885556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0
00:09:08.569 [2024-12-05 20:29:01.885574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:08.569 [2024-12-05 20:29:01.885624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0
00:09:08.569 [2024-12-05 20:29:01.885639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:08.569 #49 NEW cov: 12437 ft: 15723 corp: 34/992b lim: 85 exec/s: 24 rss: 76Mb L: 39/50 MS: 1 ChangeBit-
00:09:08.569 #49 DONE cov: 12437 ft: 15723 corp: 34/992b lim: 85 exec/s: 24 rss: 76Mb
00:09:08.569 Done 49 runs in 2 second(s)
00:09:08.827 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422'
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:09:08.828 20:29:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22
00:09:08.828 [2024-12-05 20:29:02.094652] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization...
00:09:09.086 [2024-12-05 20:29:02.094728] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847397 ]
00:09:09.086 [2024-12-05 20:29:02.300426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.086 [2024-12-05 20:29:02.338614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.086 [2024-12-05 20:29:02.397927] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:09.086 [2024-12-05 20:29:02.414173] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 ***
00:09:09.086 INFO: Running with entropic power schedule (0xFF, 100).
00:09:09.086 INFO: Seed: 3148748501
00:09:09.086 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad),
00:09:09.086 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0),
00:09:09.086 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22
00:09:09.086 INFO: A corpus is not provided, starting from an empty corpus
00:09:09.086 #2 INITED exec/s: 0 rss: 67Mb
00:09:09.086 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:09:09.086 This may also happen if the target rejected all inputs we tried so far
00:09:09.086 [2024-12-05 20:29:02.469820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:09:09.086 [2024-12-05 20:29:02.469853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:09:09.086 [2024-12-05 20:29:02.469929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0
00:09:09.086 [2024-12-05 20:29:02.469945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:09:09.086 [2024-12-05 20:29:02.470007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0
00:09:09.086 [2024-12-05 20:29:02.470024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:09:09.654 NEW_FUNC[1/718]: 0x463418 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644
00:09:09.654 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:09:09.654 #17 NEW cov: 12223 ft: 12234 corp: 2/55b lim: 85 exec/s: 0 rss: 74Mb L: 54/54 MS: 5 CopyPart-ShuffleBytes-ShuffleBytes-CMP-InsertRepeatedBytes- DE: "\272\342\317\331N\002x\000"-
00:09:09.654 [2024-12-05 20:29:02.810401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0
00:09:09.654 #26 NEW cov: 12349 ft: 13684 corp: 3/76b lim: 85 exec/s: 0 rss: 74Mb L: 21/54 MS: 4 CMP-CMP-ShuffleBytes-CrossOver- DE: "\003\000\000\000"-"\001\000\000\030"-
00:09:09.654 [2024-12-05 20:29:02.860761] nvme_qpair.c:
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.654 [2024-12-05 20:29:02.860791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.654 [2024-12-05 20:29:02.860847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:09.654 [2024-12-05 20:29:02.860863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.654 [2024-12-05 20:29:02.860926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:09.654 [2024-12-05 20:29:02.860942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.654 #32 NEW cov: 12355 ft: 13838 corp: 4/135b lim: 85 exec/s: 0 rss: 74Mb L: 59/59 MS: 1 InsertRepeatedBytes- 00:09:09.654 [2024-12-05 20:29:02.920909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.654 [2024-12-05 20:29:02.920936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.654 [2024-12-05 20:29:02.920981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:09.654 [2024-12-05 20:29:02.920998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.654 [2024-12-05 20:29:02.921051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:09.654 [2024-12-05 20:29:02.921067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.654 #33 NEW cov: 12440 ft: 14074 corp: 5/194b lim: 85 exec/s: 0 rss: 74Mb L: 59/59 MS: 1 ChangeBinInt- 00:09:09.654 [2024-12-05 20:29:02.981046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.654 [2024-12-05 20:29:02.981073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.654 [2024-12-05 20:29:02.981136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:09.654 [2024-12-05 20:29:02.981152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.654 [2024-12-05 20:29:02.981209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:09.654 [2024-12-05 20:29:02.981225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.654 #34 NEW cov: 12440 ft: 14141 corp: 6/248b lim: 85 exec/s: 0 rss: 74Mb L: 54/59 MS: 1 CrossOver- 00:09:09.654 [2024-12-05 20:29:03.041215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.654 [2024-12-05 20:29:03.041243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.654 [2024-12-05 20:29:03.041301] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:09.654 [2024-12-05 20:29:03.041317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.654 [2024-12-05 20:29:03.041372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:09.654 [2024-12-05 20:29:03.041388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.654 #40 NEW cov: 12440 ft: 14269 corp: 7/307b lim: 85 exec/s: 0 rss: 74Mb L: 59/59 MS: 1 ChangeBinInt- 00:09:09.654 [2024-12-05 20:29:03.081017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.654 [2024-12-05 20:29:03.081043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.913 #43 NEW cov: 12440 ft: 14404 corp: 8/334b lim: 85 exec/s: 0 rss: 74Mb L: 27/59 MS: 3 CopyPart-ChangeBit-CrossOver- 00:09:09.913 [2024-12-05 20:29:03.121400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.913 [2024-12-05 20:29:03.121428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.913 [2024-12-05 20:29:03.121488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:09.913 [2024-12-05 20:29:03.121504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.913 [2024-12-05 20:29:03.121560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:09.913 [2024-12-05 20:29:03.121575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.913 #44 NEW cov: 12440 ft: 14487 corp: 9/400b lim: 85 exec/s: 0 rss: 74Mb L: 66/66 MS: 1 InsertRepeatedBytes- 00:09:09.913 [2024-12-05 20:29:03.161283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.913 [2024-12-05 20:29:03.161310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.913 #47 NEW cov: 12440 ft: 14515 corp: 10/424b lim: 85 exec/s: 0 rss: 74Mb L: 24/66 MS: 3 ChangeBit-ChangeBit-CrossOver- 00:09:09.913 [2024-12-05 20:29:03.201357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.913 [2024-12-05 20:29:03.201384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.913 #48 NEW cov: 12440 ft: 14575 corp: 11/448b lim: 85 exec/s: 0 rss: 74Mb L: 24/66 MS: 1 ShuffleBytes- 00:09:09.913 [2024-12-05 20:29:03.261868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.913 [2024-12-05 20:29:03.261896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.913 [2024-12-05 20:29:03.261951] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:09.913 [2024-12-05 20:29:03.261967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.913 [2024-12-05 20:29:03.262021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:09.913 [2024-12-05 20:29:03.262041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:09.913 #49 NEW cov: 12440 ft: 14624 corp: 12/502b lim: 85 exec/s: 0 rss: 75Mb L: 54/66 MS: 1 ChangeBinInt- 00:09:09.913 [2024-12-05 20:29:03.301771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:09.913 [2024-12-05 20:29:03.301798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:09.913 [2024-12-05 20:29:03.301854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:09.913 [2024-12-05 20:29:03.301871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:09.913 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:09.913 #50 NEW cov: 12463 ft: 14918 corp: 13/536b lim: 85 exec/s: 0 rss: 75Mb L: 34/66 MS: 1 EraseBytes- 00:09:10.171 [2024-12-05 20:29:03.362121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.171 [2024-12-05 20:29:03.362147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.171 [2024-12-05 20:29:03.362211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.171 [2024-12-05 20:29:03.362228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.171 [2024-12-05 20:29:03.362282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.171 [2024-12-05 20:29:03.362298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.171 #51 NEW cov: 12463 ft: 15024 corp: 14/590b lim: 85 exec/s: 0 rss: 75Mb L: 54/66 MS: 1 ChangeByte- 00:09:10.171 [2024-12-05 20:29:03.422013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.171 [2024-12-05 20:29:03.422044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.171 #52 NEW cov: 12463 ft: 15032 corp: 15/611b lim: 85 exec/s: 52 rss: 75Mb L: 21/66 MS: 1 ChangeByte- 00:09:10.171 [2024-12-05 20:29:03.462195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.171 [2024-12-05 20:29:03.462223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.172 [2024-12-05 20:29:03.462264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.172 [2024-12-05 
20:29:03.462281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.172 #55 NEW cov: 12463 ft: 15070 corp: 16/661b lim: 85 exec/s: 55 rss: 75Mb L: 50/66 MS: 3 ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:09:10.172 [2024-12-05 20:29:03.502625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.172 [2024-12-05 20:29:03.502653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.172 [2024-12-05 20:29:03.502702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.172 [2024-12-05 20:29:03.502718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.172 [2024-12-05 20:29:03.502769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.172 [2024-12-05 20:29:03.502786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.172 [2024-12-05 20:29:03.502844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:10.172 [2024-12-05 20:29:03.502860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:10.172 #56 NEW cov: 12463 ft: 15433 corp: 17/740b lim: 85 exec/s: 56 rss: 75Mb L: 79/79 MS: 1 InsertRepeatedBytes- 00:09:10.172 [2024-12-05 20:29:03.562556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.172 [2024-12-05 20:29:03.562583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.172 [2024-12-05 20:29:03.562622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.172 [2024-12-05 20:29:03.562638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.172 #57 NEW cov: 12463 ft: 15450 corp: 18/790b lim: 85 exec/s: 57 rss: 75Mb L: 50/79 MS: 1 ChangeByte- 00:09:10.430 [2024-12-05 20:29:03.622870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.431 [2024-12-05 20:29:03.622898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.431 [2024-12-05 20:29:03.622942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.431 [2024-12-05 20:29:03.622958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.431 [2024-12-05 20:29:03.623013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.431 [2024-12-05 20:29:03.623048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.431 [2024-12-05 20:29:03.683037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:0 nsid:0 00:09:10.431 [2024-12-05 20:29:03.683064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.431 [2024-12-05 20:29:03.683101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.431 [2024-12-05 20:29:03.683118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.431 [2024-12-05 20:29:03.683174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.431 [2024-12-05 20:29:03.683190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.431 #59 NEW cov: 12463 ft: 15459 corp: 19/857b lim: 85 exec/s: 59 rss: 75Mb L: 67/79 MS: 2 ChangeBinInt-InsertByte- 00:09:10.431 [2024-12-05 20:29:03.722821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.431 [2024-12-05 20:29:03.722850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.431 #60 NEW cov: 12463 ft: 15477 corp: 20/887b lim: 85 exec/s: 60 rss: 75Mb L: 30/79 MS: 1 EraseBytes- 00:09:10.431 [2024-12-05 20:29:03.762925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.431 [2024-12-05 20:29:03.762954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.431 #61 NEW cov: 12463 ft: 15530 corp: 21/919b lim: 85 exec/s: 61 rss: 75Mb L: 32/79 MS: 1 CrossOver- 00:09:10.431 [2024-12-05 20:29:03.823519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.431 [2024-12-05 20:29:03.823546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.431 [2024-12-05 20:29:03.823587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.431 [2024-12-05 20:29:03.823604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.431 [2024-12-05 20:29:03.823658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.431 [2024-12-05 20:29:03.823675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.431 [2024-12-05 20:29:03.823730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:10.431 [2024-12-05 20:29:03.823748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:10.431 #62 NEW cov: 12463 ft: 15598 corp: 22/1000b lim: 85 exec/s: 62 rss: 75Mb L: 81/81 MS: 1 CrossOver- 00:09:10.689 [2024-12-05 20:29:03.883550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.689 [2024-12-05 20:29:03.883580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
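For orientation, the "#N NEW cov: ..." lines throughout this run are standard libFuzzer progress reports rather than SPDK output: "#N" counts executed test cases at the time of the event, "NEW" marks an input that reached new coverage and was kept, "cov:" and "ft:" count covered edges and finer-grained coverage features, "corp: 14/590b" is the corpus size in inputs and total bytes, "lim:" is the current input-length cap, "exec/s:" the execution rate, and "rss:" resident memory. The trailing fields, roughly, give the new input's length versus the largest corpus entry ("L: 54/66") and the mutation sequence that produced it ("MS: 1 ChangeByte-").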
00:09:10.689 [2024-12-05 20:29:03.883618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.689 [2024-12-05 20:29:03.883636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.689 [2024-12-05 20:29:03.883693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.689 [2024-12-05 20:29:03.883725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.689 #63 NEW cov: 12463 ft: 15665 corp: 23/1054b lim: 85 exec/s: 63 rss: 75Mb L: 54/81 MS: 1 ShuffleBytes- 00:09:10.690 [2024-12-05 20:29:03.923818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.690 [2024-12-05 20:29:03.923846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.690 [2024-12-05 20:29:03.923897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.690 [2024-12-05 20:29:03.923913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.690 [2024-12-05 20:29:03.923966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.690 [2024-12-05 20:29:03.923982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.690 [2024-12-05 20:29:03.924037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:10.690 [2024-12-05 20:29:03.924053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:10.690 #64 NEW cov: 12463 ft: 15673 corp: 24/1136b lim: 85 exec/s: 64 rss: 75Mb L: 82/82 MS: 1 InsertRepeatedBytes- 00:09:10.690 [2024-12-05 20:29:03.983861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.690 [2024-12-05 20:29:03.983889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.690 [2024-12-05 20:29:03.983928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.690 [2024-12-05 20:29:03.983945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.690 [2024-12-05 20:29:03.984001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.690 [2024-12-05 20:29:03.984017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.690 #65 NEW cov: 12463 ft: 15675 corp: 25/1195b lim: 85 exec/s: 65 rss: 75Mb L: 59/82 MS: 1 ChangeBit- 00:09:10.690 [2024-12-05 20:29:04.043996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.690 [2024-12-05 20:29:04.044024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:09:10.690 [2024-12-05 20:29:04.044072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.690 [2024-12-05 20:29:04.044088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.690 [2024-12-05 20:29:04.044143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.690 [2024-12-05 20:29:04.044160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.690 #66 NEW cov: 12463 ft: 15681 corp: 26/1253b lim: 85 exec/s: 66 rss: 75Mb L: 58/82 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:09:10.690 [2024-12-05 20:29:04.083815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.690 [2024-12-05 20:29:04.083844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.690 #67 NEW cov: 12463 ft: 15698 corp: 27/1274b lim: 85 exec/s: 67 rss: 75Mb L: 21/82 MS: 1 ChangeBit- 00:09:10.948 [2024-12-05 20:29:04.144243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.948 [2024-12-05 20:29:04.144272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.948 [2024-12-05 20:29:04.144313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.948 [2024-12-05 20:29:04.144329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.948 [2024-12-05 20:29:04.144383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.948 [2024-12-05 20:29:04.144415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.948 #73 NEW cov: 12463 ft: 15701 corp: 28/1334b lim: 85 exec/s: 73 rss: 75Mb L: 60/82 MS: 1 CopyPart- 00:09:10.948 [2024-12-05 20:29:04.204601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.948 [2024-12-05 20:29:04.204628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.949 [2024-12-05 20:29:04.204679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.949 [2024-12-05 20:29:04.204695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.949 [2024-12-05 20:29:04.204761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.949 [2024-12-05 20:29:04.204777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.949 [2024-12-05 20:29:04.204832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:10.949 [2024-12-05 20:29:04.204846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:10.949 #74 NEW cov: 12463 ft: 15710 corp: 29/1415b lim: 85 exec/s: 74 rss: 75Mb L: 81/82 MS: 1 ChangeBit- 00:09:10.949 [2024-12-05 20:29:04.264699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.949 [2024-12-05 20:29:04.264727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.949 [2024-12-05 20:29:04.264777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:10.949 [2024-12-05 20:29:04.264794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:10.949 [2024-12-05 20:29:04.264849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:10.949 [2024-12-05 20:29:04.264864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:10.949 [2024-12-05 20:29:04.264919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:10.949 [2024-12-05 20:29:04.264936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:10.949 #75 NEW cov: 12463 ft: 15735 corp: 30/1496b lim: 85 exec/s: 75 rss: 75Mb L: 81/82 MS: 1 ChangeBit- 00:09:10.949 [2024-12-05 20:29:04.324454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.949 [2024-12-05 20:29:04.324481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:10.949 #76 NEW cov: 12463 ft: 15810 corp: 31/1520b lim: 85 exec/s: 76 rss: 75Mb L: 24/82 MS: 1 PersAutoDict- DE: "\272\342\317\331N\002x\000"- 00:09:10.949 [2024-12-05 20:29:04.364481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:10.949 [2024-12-05 20:29:04.364509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.208 #77 NEW cov: 12463 ft: 15835 corp: 32/1544b lim: 85 exec/s: 77 rss: 76Mb L: 24/82 MS: 1 CopyPart- 00:09:11.208 [2024-12-05 20:29:04.424668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:11.208 [2024-12-05 20:29:04.424696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.208 #78 NEW cov: 12463 ft: 15857 corp: 33/1561b lim: 85 exec/s: 39 rss: 76Mb L: 17/82 MS: 1 EraseBytes- 00:09:11.208 #78 DONE cov: 12463 ft: 15857 corp: 33/1561b lim: 85 exec/s: 39 rss: 76Mb 00:09:11.208 ###### Recommended dictionary. ###### 00:09:11.208 "\272\342\317\331N\002x\000" # Uses: 2 00:09:11.208 "\003\000\000\000" # Uses: 1 00:09:11.208 "\001\000\000\030" # Uses: 0 00:09:11.208 ###### End of recommended dictionary. 
###### 00:09:11.208 Done 78 runs in 2 second(s) 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:11.208 20:29:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:09:11.208 [2024-12-05 20:29:04.613613] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
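The xtrace lines above show how start_llvm_fuzz in nvmf/run.sh wires up a single fuzzer target: derive a per-target TCP port, create a corpus directory, rewrite the listener port in the JSON config, register expected-leak suppressions for LSAN, and launch llvm_nvme_fuzz. A condensed sketch of the same steps, reconstructed only from the traced commands — the variable names rootdir and output_dir, the 44xx port derivation (inferred from "printf %02d 23" yielding port 4423), the sed redirect target, and the append mode of the suppression-file echoes are assumptions, not the script's literal text:

  fuzzer=23
  port=44$(printf '%02d' "$fuzzer")    # 23 -> 4423, 24 -> 4424 (derivation inferred)
  corpus="$rootdir/../corpus/llvm_nvmf_$fuzzer"
  nvmf_cfg="/tmp/fuzz_json_$fuzzer.conf"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

  mkdir -p "$corpus"
  # Point the shared JSON config at this target's listener port
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # Suppress leaks that are expected while fuzzing tears the target down
  echo leak:spdk_nvmf_qpair_disconnect >> /var/tmp/suppress_nvmf_fuzz
  echo leak:nvmf_ctrlr_create >> /var/tmp/suppress_nvmf_fuzz

  LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
  "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$output_dir/llvm/" -F "$trid" -c "$nvmf_cfg" -t 1 -D "$corpus" -Z "$fuzzer"

The run-23 log that follows is this invocation: llvm_nvme_fuzz brings up an NVMe/TCP target on port 4423 and fuzzes it for one minute (-t 1) on core 0x1.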
00:09:11.208 [2024-12-05 20:29:04.613705] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847747 ] 00:09:11.467 [2024-12-05 20:29:04.819210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.467 [2024-12-05 20:29:04.857456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.726 [2024-12-05 20:29:04.916854] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.726 [2024-12-05 20:29:04.933083] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:09:11.726 INFO: Running with entropic power schedule (0xFF, 100). 00:09:11.726 INFO: Seed: 1373763950 00:09:11.726 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:09:11.726 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:09:11.726 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:11.726 INFO: A corpus is not provided, starting from an empty corpus 00:09:11.726 #2 INITED exec/s: 0 rss: 67Mb 00:09:11.726 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:11.726 This may also happen if the target rejected all inputs we tried so far 00:09:11.726 [2024-12-05 20:29:04.999972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:11.726 [2024-12-05 20:29:05.000013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.726 [2024-12-05 20:29:05.000098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:11.726 [2024-12-05 20:29:05.000117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.985 NEW_FUNC[1/717]: 0x466658 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:09:11.985 NEW_FUNC[2/717]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:11.985 #3 NEW cov: 12167 ft: 12168 corp: 2/13b lim: 25 exec/s: 0 rss: 74Mb L: 12/12 MS: 1 InsertRepeatedBytes- 00:09:11.986 [2024-12-05 20:29:05.349788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:11.986 [2024-12-05 20:29:05.349856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.986 [2024-12-05 20:29:05.349943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:11.986 [2024-12-05 20:29:05.349974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.986 [2024-12-05 20:29:05.350054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:11.986 [2024-12-05 20:29:05.350082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.986 #4 NEW cov: 12282 ft: 
13167 corp: 3/30b lim: 25 exec/s: 0 rss: 74Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:09:11.986 [2024-12-05 20:29:05.399950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:11.986 [2024-12-05 20:29:05.399978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:11.986 [2024-12-05 20:29:05.400049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:11.986 [2024-12-05 20:29:05.400066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:11.986 [2024-12-05 20:29:05.400120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:11.986 [2024-12-05 20:29:05.400135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:11.986 [2024-12-05 20:29:05.400190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:11.986 [2024-12-05 20:29:05.400206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:11.986 [2024-12-05 20:29:05.400273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:11.986 [2024-12-05 20:29:05.400289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:12.244 #5 NEW cov: 12288 ft: 13910 corp: 4/55b lim: 25 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:09:12.244 [2024-12-05 20:29:05.459705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.244 [2024-12-05 20:29:05.459733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.459777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.244 [2024-12-05 20:29:05.459793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.244 #6 NEW cov: 12373 ft: 14198 corp: 5/67b lim: 25 exec/s: 0 rss: 74Mb L: 12/25 MS: 1 ChangeByte- 00:09:12.244 [2024-12-05 20:29:05.520233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.244 [2024-12-05 20:29:05.520261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.520318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.244 [2024-12-05 20:29:05.520334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.520391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:12.244 [2024-12-05 20:29:05.520407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.520461] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:12.244 [2024-12-05 20:29:05.520478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.520532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:12.244 [2024-12-05 20:29:05.520547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:12.244 #7 NEW cov: 12373 ft: 14415 corp: 6/92b lim: 25 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:09:12.244 [2024-12-05 20:29:05.580017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.244 [2024-12-05 20:29:05.580047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.580084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.244 [2024-12-05 20:29:05.580101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.244 #8 NEW cov: 12373 ft: 14478 corp: 7/104b lim: 25 exec/s: 0 rss: 74Mb L: 12/25 MS: 1 ShuffleBytes- 00:09:12.244 [2024-12-05 20:29:05.640418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.244 [2024-12-05 20:29:05.640444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.640510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.244 [2024-12-05 20:29:05.640527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.640581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:12.244 [2024-12-05 20:29:05.640598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.640650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:12.244 [2024-12-05 20:29:05.640666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.244 #11 NEW cov: 12373 ft: 14522 corp: 8/126b lim: 25 exec/s: 0 rss: 74Mb L: 22/25 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:09:12.244 [2024-12-05 20:29:05.680565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.244 [2024-12-05 20:29:05.680593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.244 [2024-12-05 20:29:05.680641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.244 [2024-12-05 20:29:05.680658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.245 [2024-12-05 
20:29:05.680711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:12.245 [2024-12-05 20:29:05.680728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.245 [2024-12-05 20:29:05.680784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:12.245 [2024-12-05 20:29:05.680801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.580 #12 NEW cov: 12373 ft: 14554 corp: 9/148b lim: 25 exec/s: 0 rss: 74Mb L: 22/25 MS: 1 ShuffleBytes- 00:09:12.580 [2024-12-05 20:29:05.740435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.580 [2024-12-05 20:29:05.740463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.740519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.580 [2024-12-05 20:29:05.740535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.580 #13 NEW cov: 12373 ft: 14645 corp: 10/161b lim: 25 exec/s: 0 rss: 74Mb L: 13/25 MS: 1 InsertByte- 00:09:12.580 [2024-12-05 20:29:05.800650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.580 [2024-12-05 20:29:05.800677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.800728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.580 [2024-12-05 20:29:05.800749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.580 #14 NEW cov: 12373 ft: 14744 corp: 11/174b lim: 25 exec/s: 0 rss: 74Mb L: 13/25 MS: 1 InsertByte- 00:09:12.580 [2024-12-05 20:29:05.840986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.580 [2024-12-05 20:29:05.841015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.841066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.580 [2024-12-05 20:29:05.841082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.841137] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:12.580 [2024-12-05 20:29:05.841153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.841208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:12.580 [2024-12-05 20:29:05.841225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.580 NEW_FUNC[1/1]: 0x1c54458 in 
get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:12.580 #15 NEW cov: 12396 ft: 14783 corp: 12/195b lim: 25 exec/s: 0 rss: 74Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:09:12.580 [2024-12-05 20:29:05.900895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.580 [2024-12-05 20:29:05.900921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.900961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.580 [2024-12-05 20:29:05.900976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.580 #16 NEW cov: 12396 ft: 14798 corp: 13/207b lim: 25 exec/s: 0 rss: 74Mb L: 12/25 MS: 1 ChangeBinInt- 00:09:12.580 [2024-12-05 20:29:05.941479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.580 [2024-12-05 20:29:05.941508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.941557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.580 [2024-12-05 20:29:05.941573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.941620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:12.580 [2024-12-05 20:29:05.941635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.941687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:12.580 [2024-12-05 20:29:05.941702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.580 [2024-12-05 20:29:05.941760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:12.580 [2024-12-05 20:29:05.941777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:12.580 #17 NEW cov: 12396 ft: 14818 corp: 14/232b lim: 25 exec/s: 17 rss: 75Mb L: 25/25 MS: 1 ChangeByte- 00:09:12.864 [2024-12-05 20:29:06.001179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.864 [2024-12-05 20:29:06.001208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.864 [2024-12-05 20:29:06.001247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.864 [2024-12-05 20:29:06.001263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.864 #18 NEW cov: 12396 ft: 14860 corp: 15/245b lim: 25 exec/s: 18 rss: 75Mb L: 13/25 MS: 1 InsertByte- 00:09:12.865 [2024-12-05 20:29:06.061350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:0 nsid:0 00:09:12.865 [2024-12-05 20:29:06.061379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.061420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.865 [2024-12-05 20:29:06.061435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.865 #21 NEW cov: 12396 ft: 14905 corp: 16/258b lim: 25 exec/s: 21 rss: 75Mb L: 13/25 MS: 3 CrossOver-CopyPart-CrossOver- 00:09:12.865 [2024-12-05 20:29:06.101810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.865 [2024-12-05 20:29:06.101839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.101893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.865 [2024-12-05 20:29:06.101909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.101960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:12.865 [2024-12-05 20:29:06.101976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.102028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:12.865 [2024-12-05 20:29:06.102043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.102095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:12.865 [2024-12-05 20:29:06.102111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:12.865 #22 NEW cov: 12396 ft: 14920 corp: 17/283b lim: 25 exec/s: 22 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:09:12.865 [2024-12-05 20:29:06.161872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.865 [2024-12-05 20:29:06.161900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.161952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.865 [2024-12-05 20:29:06.161968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.162036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:12.865 [2024-12-05 20:29:06.162052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.162106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:12.865 [2024-12-05 20:29:06.162123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.865 #23 NEW cov: 12396 ft: 14942 corp: 18/305b lim: 25 exec/s: 23 rss: 75Mb L: 22/25 MS: 1 ShuffleBytes- 00:09:12.865 [2024-12-05 20:29:06.222021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.865 [2024-12-05 20:29:06.222050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.222098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.865 [2024-12-05 20:29:06.222115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.222167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:12.865 [2024-12-05 20:29:06.222182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.222237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:12.865 [2024-12-05 20:29:06.222252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.865 #24 NEW cov: 12396 ft: 14956 corp: 19/327b lim: 25 exec/s: 24 rss: 75Mb L: 22/25 MS: 1 CrossOver- 00:09:12.865 [2024-12-05 20:29:06.262246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:12.865 [2024-12-05 20:29:06.262272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.262329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:12.865 [2024-12-05 20:29:06.262344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.262399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:12.865 [2024-12-05 20:29:06.262416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.262467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:12.865 [2024-12-05 20:29:06.262483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:12.865 [2024-12-05 20:29:06.262536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:12.865 [2024-12-05 20:29:06.262551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:13.145 #25 NEW cov: 12396 ft: 15023 corp: 20/352b lim: 25 exec/s: 25 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:09:13.145 [2024-12-05 20:29:06.322425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.145 [2024-12-05 20:29:06.322455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.322513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.145 [2024-12-05 20:29:06.322529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.322586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.145 [2024-12-05 20:29:06.322601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.322659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:13.145 [2024-12-05 20:29:06.322675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.322735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:13.145 [2024-12-05 20:29:06.322763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:13.145 #26 NEW cov: 12396 ft: 15052 corp: 21/377b lim: 25 exec/s: 26 rss: 75Mb L: 25/25 MS: 1 ChangeBit- 00:09:13.145 [2024-12-05 20:29:06.362168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.145 [2024-12-05 20:29:06.362197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.362254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.145 [2024-12-05 20:29:06.362270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.145 #27 NEW cov: 12396 ft: 15077 corp: 22/390b lim: 25 exec/s: 27 rss: 75Mb L: 13/25 MS: 1 ChangeBit- 00:09:13.145 [2024-12-05 20:29:06.422308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.145 [2024-12-05 20:29:06.422336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.422373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.145 [2024-12-05 20:29:06.422389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.145 #28 NEW cov: 12396 ft: 15093 corp: 23/402b lim: 25 exec/s: 28 rss: 75Mb L: 12/25 MS: 1 ChangeByte- 00:09:13.145 [2024-12-05 20:29:06.462640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.145 [2024-12-05 20:29:06.462667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.462721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.145 [2024-12-05 20:29:06.462737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.462795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.145 [2024-12-05 20:29:06.462809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.462863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:13.145 [2024-12-05 20:29:06.462879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.145 #29 NEW cov: 12396 ft: 15105 corp: 24/424b lim: 25 exec/s: 29 rss: 75Mb L: 22/25 MS: 1 ShuffleBytes- 00:09:13.145 [2024-12-05 20:29:06.502896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.145 [2024-12-05 20:29:06.502923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.502978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.145 [2024-12-05 20:29:06.502994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.503048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.145 [2024-12-05 20:29:06.503065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.503120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:13.145 [2024-12-05 20:29:06.503139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.503195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:13.145 [2024-12-05 20:29:06.503212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:13.145 #30 NEW cov: 12396 ft: 15122 corp: 25/449b lim: 25 exec/s: 30 rss: 75Mb L: 25/25 MS: 1 ChangeBinInt- 00:09:13.145 [2024-12-05 20:29:06.542670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.145 [2024-12-05 20:29:06.542697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.145 [2024-12-05 20:29:06.542736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.145 [2024-12-05 20:29:06.542757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.403 #31 NEW cov: 12396 ft: 15140 corp: 26/461b lim: 25 exec/s: 31 rss: 75Mb L: 12/25 MS: 1 ShuffleBytes- 00:09:13.403 [2024-12-05 20:29:06.603175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.403 [2024-12-05 20:29:06.603202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.403 [2024-12-05 20:29:06.603259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.404 [2024-12-05 20:29:06.603275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.603327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.404 [2024-12-05 20:29:06.603343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.603395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:13.404 [2024-12-05 20:29:06.603412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.603466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:13.404 [2024-12-05 20:29:06.603481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:13.404 #32 NEW cov: 12396 ft: 15155 corp: 27/486b lim: 25 exec/s: 32 rss: 75Mb L: 25/25 MS: 1 ChangeBinInt- 00:09:13.404 [2024-12-05 20:29:06.643196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.404 [2024-12-05 20:29:06.643222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.643271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.404 [2024-12-05 20:29:06.643288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.643340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.404 [2024-12-05 20:29:06.643356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.643411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:13.404 [2024-12-05 20:29:06.643427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.404 #33 NEW cov: 12396 ft: 15168 corp: 28/508b lim: 25 exec/s: 33 rss: 75Mb L: 22/25 MS: 1 ShuffleBytes- 00:09:13.404 [2024-12-05 20:29:06.683168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.404 [2024-12-05 20:29:06.683195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.683233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.404 [2024-12-05 20:29:06.683249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.683304] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.404 [2024-12-05 20:29:06.683337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.404 #34 NEW cov: 12396 ft: 15171 corp: 29/525b lim: 25 exec/s: 34 rss: 75Mb L: 17/25 MS: 1 CopyPart- 00:09:13.404 [2024-12-05 20:29:06.723154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.404 [2024-12-05 20:29:06.723180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.723219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.404 [2024-12-05 20:29:06.723236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.404 #35 NEW cov: 12396 ft: 15178 corp: 30/537b lim: 25 exec/s: 35 rss: 75Mb L: 12/25 MS: 1 CrossOver- 00:09:13.404 [2024-12-05 20:29:06.783588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.404 [2024-12-05 20:29:06.783615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.783669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.404 [2024-12-05 20:29:06.783685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.783738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.404 [2024-12-05 20:29:06.783757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.783813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:13.404 [2024-12-05 20:29:06.783829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.404 #36 NEW cov: 12396 ft: 15238 corp: 31/557b lim: 25 exec/s: 36 rss: 75Mb L: 20/25 MS: 1 EraseBytes- 00:09:13.404 [2024-12-05 20:29:06.823718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.404 [2024-12-05 20:29:06.823750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.823800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.404 [2024-12-05 20:29:06.823817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.823870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.404 [2024-12-05 20:29:06.823886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.404 [2024-12-05 20:29:06.823940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:3 nsid:0 00:09:13.404 [2024-12-05 20:29:06.823954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.670 #37 NEW cov: 12396 ft: 15244 corp: 32/581b lim: 25 exec/s: 37 rss: 75Mb L: 24/25 MS: 1 CrossOver- 00:09:13.670 [2024-12-05 20:29:06.863567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.670 [2024-12-05 20:29:06.863595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.670 [2024-12-05 20:29:06.863637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.670 [2024-12-05 20:29:06.863652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.670 #38 NEW cov: 12396 ft: 15279 corp: 33/593b lim: 25 exec/s: 38 rss: 75Mb L: 12/25 MS: 1 ChangeBinInt- 00:09:13.670 [2024-12-05 20:29:06.923956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.670 [2024-12-05 20:29:06.923983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.670 [2024-12-05 20:29:06.924037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.670 [2024-12-05 20:29:06.924053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.670 [2024-12-05 20:29:06.924108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.670 [2024-12-05 20:29:06.924124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.670 [2024-12-05 20:29:06.924180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:13.670 [2024-12-05 20:29:06.924196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.670 #39 NEW cov: 12396 ft: 15290 corp: 34/615b lim: 25 exec/s: 39 rss: 75Mb L: 22/25 MS: 1 ChangeByte- 00:09:13.670 [2024-12-05 20:29:06.964197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:13.670 [2024-12-05 20:29:06.964227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:13.670 [2024-12-05 20:29:06.964276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:13.670 [2024-12-05 20:29:06.964293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:13.670 [2024-12-05 20:29:06.964367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:13.670 [2024-12-05 20:29:06.964387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:13.670 [2024-12-05 20:29:06.964446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) 
sqid:1 cid:3 nsid:0 00:09:13.670 [2024-12-05 20:29:06.964463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:13.670 [2024-12-05 20:29:06.964519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:09:13.670 [2024-12-05 20:29:06.964537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:13.670 #40 NEW cov: 12396 ft: 15296 corp: 35/640b lim: 25 exec/s: 20 rss: 75Mb L: 25/25 MS: 1 ShuffleBytes- 00:09:13.670 #40 DONE cov: 12396 ft: 15296 corp: 35/640b lim: 25 exec/s: 20 rss: 75Mb 00:09:13.670 Done 40 runs in 2 second(s) 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:13.929 20:29:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:09:13.929 [2024-12-05 20:29:07.155757] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:13.929 [2024-12-05 20:29:07.155819] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848126 ] 00:09:13.929 [2024-12-05 20:29:07.359420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.188 [2024-12-05 20:29:07.397601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.188 [2024-12-05 20:29:07.456697] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.188 [2024-12-05 20:29:07.472918] tcp.c:1100:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:09:14.188 INFO: Running with entropic power schedule (0xFF, 100). 00:09:14.188 INFO: Seed: 3912757188 00:09:14.188 INFO: Loaded 1 modules (390433 inline 8-bit counters): 390433 [0x2c8048c, 0x2cdf9ad), 00:09:14.188 INFO: Loaded 1 PC tables (390433 PCs): 390433 [0x2cdf9b0,0x32d4bc0), 00:09:14.188 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:14.188 INFO: A corpus is not provided, starting from an empty corpus 00:09:14.188 #2 INITED exec/s: 0 rss: 67Mb 00:09:14.188 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:14.188 This may also happen if the target rejected all inputs we tried so far 00:09:14.188 [2024-12-05 20:29:07.528209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.188 [2024-12-05 20:29:07.528242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.446 NEW_FUNC[1/718]: 0x467748 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:09:14.446 NEW_FUNC[2/718]: 0x4783c8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:14.446 #3 NEW cov: 12241 ft: 12204 corp: 2/25b lim: 100 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:09:14.446 [2024-12-05 20:29:07.859132] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.446 [2024-12-05 20:29:07.859174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.703 #4 NEW cov: 12354 ft: 12710 corp: 3/49b lim: 100 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 CopyPart- 00:09:14.703 [2024-12-05 20:29:07.919201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.704 [2024-12-05 20:29:07.919233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.704 #5 NEW cov: 12360 ft: 13078 corp: 4/73b lim: 100 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 ChangeBinInt- 00:09:14.704 [2024-12-05 20:29:07.959298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.704 [2024-12-05 20:29:07.959328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.704 #6 NEW cov: 12445 ft: 13392 corp: 5/97b lim: 100 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 ShuffleBytes- 00:09:14.704 [2024-12-05 20:29:08.019446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.704 [2024-12-05 20:29:08.019475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.704 #12 NEW cov: 12445 ft: 13574 corp: 6/121b lim: 100 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 CopyPart- 00:09:14.704 [2024-12-05 20:29:08.059574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.704 [2024-12-05 20:29:08.059602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.704 #13 NEW cov: 12445 ft: 13621 corp: 7/145b lim: 100 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 CopyPart- 00:09:14.704 [2024-12-05 20:29:08.119713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044167 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.704 [2024-12-05 20:29:08.119750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.961 #14 NEW cov: 12445 ft: 13677 corp: 8/169b lim: 100 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeBit- 00:09:14.961 [2024-12-05 20:29:08.179849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.961 [2024-12-05 20:29:08.179878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.961 #19 NEW cov: 12445 ft: 13723 corp: 9/193b lim: 100 exec/s: 0 rss: 75Mb L: 24/24 MS: 5 ShuffleBytes-ChangeByte-ShuffleBytes-ShuffleBytes-CrossOver- 00:09:14.961 [2024-12-05 20:29:08.220018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.961 [2024-12-05 20:29:08.220046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.961 #20 NEW cov: 12445 ft: 13761 corp: 10/217b lim: 100 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeBinInt- 00:09:14.961 [2024-12-05 20:29:08.260113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.961 [2024-12-05 20:29:08.260142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.961 #21 NEW cov: 12445 ft: 13826 corp: 11/241b lim: 100 exec/s: 0 rss: 75Mb L: 24/24 MS: 1 ChangeByte- 00:09:14.962 [2024-12-05 20:29:08.320294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.962 [2024-12-05 20:29:08.320322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.962 #22 NEW cov: 12445 
ft: 13868 corp: 12/279b lim: 100 exec/s: 0 rss: 75Mb L: 38/38 MS: 1 CopyPart- 00:09:14.962 [2024-12-05 20:29:08.380603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.962 [2024-12-05 20:29:08.380632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:14.962 [2024-12-05 20:29:08.380671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9476562641670603651 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.962 [2024-12-05 20:29:08.380688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.219 NEW_FUNC[1/1]: 0x1c54458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:15.219 #23 NEW cov: 12468 ft: 14686 corp: 13/324b lim: 100 exec/s: 0 rss: 75Mb L: 45/45 MS: 1 CopyPart- 00:09:15.219 [2024-12-05 20:29:08.420547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.219 [2024-12-05 20:29:08.420575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.219 #24 NEW cov: 12468 ft: 14713 corp: 14/348b lim: 100 exec/s: 0 rss: 75Mb L: 24/45 MS: 1 ChangeBinInt- 00:09:15.219 [2024-12-05 20:29:08.460973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9511602410917954435 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.219 [2024-12-05 20:29:08.461000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.219 [2024-12-05 20:29:08.461052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65412 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.219 [2024-12-05 20:29:08.461069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.219 [2024-12-05 20:29:08.461123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.219 [2024-12-05 20:29:08.461138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.219 #25 NEW cov: 12468 ft: 15084 corp: 15/412b lim: 100 exec/s: 0 rss: 75Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:09:15.219 [2024-12-05 20:29:08.520842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9439826296151704451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.219 [2024-12-05 20:29:08.520870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.219 #26 NEW cov: 12468 ft: 15120 corp: 16/436b lim: 100 exec/s: 26 rss: 75Mb L: 24/64 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:09:15.219 [2024-12-05 20:29:08.560959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.219 [2024-12-05 20:29:08.560988] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.219 #27 NEW cov: 12468 ft: 15194 corp: 17/461b lim: 100 exec/s: 27 rss: 75Mb L: 25/64 MS: 1 InsertByte- 00:09:15.219 [2024-12-05 20:29:08.621241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.219 [2024-12-05 20:29:08.621271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.219 [2024-12-05 20:29:08.621335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.219 [2024-12-05 20:29:08.621352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.476 #28 NEW cov: 12468 ft: 15211 corp: 18/506b lim: 100 exec/s: 28 rss: 75Mb L: 45/64 MS: 1 ShuffleBytes- 00:09:15.476 [2024-12-05 20:29:08.681391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9511602410917954435 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.476 [2024-12-05 20:29:08.681418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.476 [2024-12-05 20:29:08.681465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.476 [2024-12-05 20:29:08.681483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.476 #29 NEW cov: 12468 ft: 15227 corp: 19/556b lim: 100 exec/s: 29 rss: 75Mb L: 50/64 MS: 1 EraseBytes- 00:09:15.476 [2024-12-05 20:29:08.741402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.476 [2024-12-05 20:29:08.741430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.476 #30 NEW cov: 12468 ft: 15236 corp: 20/581b lim: 100 exec/s: 30 rss: 75Mb L: 25/64 MS: 1 InsertByte- 00:09:15.476 [2024-12-05 20:29:08.781523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4039947479594566226 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.476 [2024-12-05 20:29:08.781552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.476 #36 NEW cov: 12468 ft: 15246 corp: 21/613b lim: 100 exec/s: 36 rss: 75Mb L: 32/64 MS: 1 CMP- DE: "\000x\002R8\020\306\324"- 00:09:15.476 [2024-12-05 20:29:08.822124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9511602410917954435 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.476 [2024-12-05 20:29:08.822153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.476 [2024-12-05 20:29:08.822201] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65412 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:15.476 [2024-12-05 20:29:08.822217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.476 [2024-12-05 20:29:08.822274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2197815296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.476 [2024-12-05 20:29:08.822290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.476 [2024-12-05 20:29:08.822348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.476 [2024-12-05 20:29:08.822364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:15.476 #37 NEW cov: 12468 ft: 15625 corp: 22/706b lim: 100 exec/s: 37 rss: 75Mb L: 93/93 MS: 1 InsertRepeatedBytes- 00:09:15.476 [2024-12-05 20:29:08.861870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.476 [2024-12-05 20:29:08.861902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.476 [2024-12-05 20:29:08.861945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.476 [2024-12-05 20:29:08.861962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.476 #38 NEW cov: 12468 ft: 15643 corp: 23/751b lim: 100 exec/s: 38 rss: 75Mb L: 45/93 MS: 1 ChangeByte- 00:09:15.734 [2024-12-05 20:29:08.921898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.734 [2024-12-05 20:29:08.921928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.734 #39 NEW cov: 12468 ft: 15689 corp: 24/775b lim: 100 exec/s: 39 rss: 75Mb L: 24/93 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:09:15.734 [2024-12-05 20:29:08.961998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.734 [2024-12-05 20:29:08.962028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.734 #40 NEW cov: 12468 ft: 15712 corp: 25/799b lim: 100 exec/s: 40 rss: 75Mb L: 24/93 MS: 1 ChangeBinInt- 00:09:15.734 [2024-12-05 20:29:09.022626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9511602410917954435 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.734 [2024-12-05 20:29:09.022657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.734 [2024-12-05 20:29:09.022725] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069414584575 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.734 [2024-12-05 20:29:09.022750] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.734 [2024-12-05 20:29:09.022820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2206434179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.734 [2024-12-05 20:29:09.022837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:15.734 [2024-12-05 20:29:09.022895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.734 [2024-12-05 20:29:09.022911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:15.734 #41 NEW cov: 12468 ft: 15762 corp: 26/895b lim: 100 exec/s: 41 rss: 75Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:09:15.734 [2024-12-05 20:29:09.092366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.734 [2024-12-05 20:29:09.092397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.734 #42 NEW cov: 12468 ft: 15810 corp: 27/924b lim: 100 exec/s: 42 rss: 75Mb L: 29/96 MS: 1 EraseBytes- 00:09:15.734 [2024-12-05 20:29:09.132460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.734 [2024-12-05 20:29:09.132490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.734 #43 NEW cov: 12468 ft: 15821 corp: 28/955b lim: 100 exec/s: 43 rss: 75Mb L: 31/96 MS: 1 CrossOver- 00:09:15.993 [2024-12-05 20:29:09.172602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.993 [2024-12-05 20:29:09.172644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.993 #44 NEW cov: 12468 ft: 15830 corp: 29/976b lim: 100 exec/s: 44 rss: 76Mb L: 21/96 MS: 1 EraseBytes- 00:09:15.993 [2024-12-05 20:29:09.232784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.993 [2024-12-05 20:29:09.232812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.993 #45 NEW cov: 12468 ft: 15835 corp: 30/1000b lim: 100 exec/s: 45 rss: 76Mb L: 24/96 MS: 1 ShuffleBytes- 00:09:15.993 [2024-12-05 20:29:09.292931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.993 [2024-12-05 20:29:09.292959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.993 #46 NEW cov: 12468 ft: 15842 corp: 31/1024b lim: 100 exec/s: 46 rss: 76Mb L: 24/96 MS: 1 ShuffleBytes- 00:09:15.993 [2024-12-05 20:29:09.333020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 
len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.993 [2024-12-05 20:29:09.333049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.993 #47 NEW cov: 12468 ft: 15874 corp: 32/1049b lim: 100 exec/s: 47 rss: 76Mb L: 25/96 MS: 1 ShuffleBytes- 00:09:15.993 [2024-12-05 20:29:09.393534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.993 [2024-12-05 20:29:09.393563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:15.993 [2024-12-05 20:29:09.393600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9439688958071047043 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.993 [2024-12-05 20:29:09.393617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:15.993 [2024-12-05 20:29:09.393673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9476562641788041596 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.993 [2024-12-05 20:29:09.393691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:16.251 #48 NEW cov: 12468 ft: 15879 corp: 33/1118b lim: 100 exec/s: 48 rss: 76Mb L: 69/96 MS: 1 CrossOver- 00:09:16.251 [2024-12-05 20:29:09.453351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.251 [2024-12-05 20:29:09.453380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.251 #49 NEW cov: 12468 ft: 15884 corp: 34/1150b lim: 100 exec/s: 49 rss: 76Mb L: 32/96 MS: 1 PersAutoDict- DE: "\000x\002R8\020\306\324"- 00:09:16.251 [2024-12-05 20:29:09.513531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9476562641788044163 len:33668 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.251 [2024-12-05 20:29:09.513558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:16.251 #50 NEW cov: 12468 ft: 15887 corp: 35/1174b lim: 100 exec/s: 25 rss: 76Mb L: 24/96 MS: 1 ChangeByte- 00:09:16.251 #50 DONE cov: 12468 ft: 15887 corp: 35/1174b lim: 100 exec/s: 25 rss: 76Mb 00:09:16.251 ###### Recommended dictionary. ###### 00:09:16.251 "\001\000\000\000\000\000\000\000" # Uses: 1 00:09:16.251 "\000x\002R8\020\306\324" # Uses: 1 00:09:16.251 ###### End of recommended dictionary. 
###### 00:09:16.251 Done 50 runs in 2 second(s) 00:09:16.251 20:29:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:09:16.251 20:29:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:16.251 20:29:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:16.251 20:29:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:09:16.251 00:09:16.251 real 1m4.749s 00:09:16.251 user 1m40.048s 00:09:16.251 sys 0m8.074s 00:09:16.251 20:29:09 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.251 20:29:09 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:16.251 ************************************ 00:09:16.251 END TEST nvmf_llvm_fuzz 00:09:16.251 ************************************ 00:09:16.510 20:29:09 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:09:16.510 20:29:09 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:09:16.510 20:29:09 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:16.510 20:29:09 llvm_fuzz -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.510 20:29:09 llvm_fuzz -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.510 20:29:09 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:16.510 ************************************ 00:09:16.510 START TEST vfio_llvm_fuzz 00:09:16.510 ************************************ 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:16.510 * Looking for test storage... 
00:09:16.510 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.510 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.511 --rc genhtml_branch_coverage=1 00:09:16.511 --rc genhtml_function_coverage=1 00:09:16.511 --rc genhtml_legend=1 00:09:16.511 --rc geninfo_all_blocks=1 00:09:16.511 --rc geninfo_unexecuted_blocks=1 00:09:16.511 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:16.511 ' 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.511 --rc genhtml_branch_coverage=1 00:09:16.511 --rc genhtml_function_coverage=1 00:09:16.511 --rc genhtml_legend=1 00:09:16.511 --rc geninfo_all_blocks=1 00:09:16.511 --rc geninfo_unexecuted_blocks=1 00:09:16.511 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:16.511 ' 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.511 --rc genhtml_branch_coverage=1 00:09:16.511 --rc genhtml_function_coverage=1 00:09:16.511 --rc genhtml_legend=1 00:09:16.511 --rc geninfo_all_blocks=1 00:09:16.511 --rc geninfo_unexecuted_blocks=1 00:09:16.511 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:16.511 ' 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.511 --rc genhtml_branch_coverage=1 00:09:16.511 --rc genhtml_function_coverage=1 00:09:16.511 --rc genhtml_legend=1 00:09:16.511 --rc geninfo_all_blocks=1 00:09:16.511 --rc geninfo_unexecuted_blocks=1 00:09:16.511 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:16.511 ' 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:16.511 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:16.512 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:16.512 #define SPDK_CONFIG_H 00:09:16.512 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:16.512 #define SPDK_CONFIG_APPS 1 00:09:16.512 #define SPDK_CONFIG_ARCH native 00:09:16.512 #undef SPDK_CONFIG_ASAN 00:09:16.512 #undef SPDK_CONFIG_AVAHI 00:09:16.512 #undef SPDK_CONFIG_CET 00:09:16.512 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:16.512 #define SPDK_CONFIG_COVERAGE 1 00:09:16.512 #define SPDK_CONFIG_CROSS_PREFIX 00:09:16.512 #undef SPDK_CONFIG_CRYPTO 00:09:16.512 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:16.512 #undef SPDK_CONFIG_CUSTOMOCF 00:09:16.512 #undef SPDK_CONFIG_DAOS 00:09:16.512 #define SPDK_CONFIG_DAOS_DIR 00:09:16.512 #define SPDK_CONFIG_DEBUG 1 00:09:16.512 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:16.512 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:16.512 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:16.512 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:16.512 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:16.512 #undef SPDK_CONFIG_DPDK_UADK 00:09:16.512 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:16.512 #define SPDK_CONFIG_EXAMPLES 1 00:09:16.512 #undef SPDK_CONFIG_FC 00:09:16.512 #define SPDK_CONFIG_FC_PATH 00:09:16.512 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:16.512 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:16.512 #define SPDK_CONFIG_FSDEV 1 00:09:16.512 #undef SPDK_CONFIG_FUSE 00:09:16.512 #define SPDK_CONFIG_FUZZER 1 00:09:16.512 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:09:16.512 #undef 
SPDK_CONFIG_GOLANG 00:09:16.513 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:16.513 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:16.513 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:16.513 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:16.513 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:16.513 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:16.513 #undef SPDK_CONFIG_HAVE_LZ4 00:09:16.513 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:16.513 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:16.513 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:16.513 #define SPDK_CONFIG_IDXD 1 00:09:16.513 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:16.513 #undef SPDK_CONFIG_IPSEC_MB 00:09:16.513 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:16.513 #define SPDK_CONFIG_ISAL 1 00:09:16.513 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:16.513 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:16.513 #define SPDK_CONFIG_LIBDIR 00:09:16.513 #undef SPDK_CONFIG_LTO 00:09:16.513 #define SPDK_CONFIG_MAX_LCORES 128 00:09:16.513 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:16.513 #define SPDK_CONFIG_NVME_CUSE 1 00:09:16.513 #undef SPDK_CONFIG_OCF 00:09:16.513 #define SPDK_CONFIG_OCF_PATH 00:09:16.513 #define SPDK_CONFIG_OPENSSL_PATH 00:09:16.513 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:16.513 #define SPDK_CONFIG_PGO_DIR 00:09:16.513 #undef SPDK_CONFIG_PGO_USE 00:09:16.513 #define SPDK_CONFIG_PREFIX /usr/local 00:09:16.513 #undef SPDK_CONFIG_RAID5F 00:09:16.513 #undef SPDK_CONFIG_RBD 00:09:16.513 #define SPDK_CONFIG_RDMA 1 00:09:16.513 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:16.513 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:16.513 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:16.513 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:16.513 #undef SPDK_CONFIG_SHARED 00:09:16.513 #undef SPDK_CONFIG_SMA 00:09:16.513 #define SPDK_CONFIG_TESTS 1 00:09:16.513 #undef SPDK_CONFIG_TSAN 00:09:16.513 #define SPDK_CONFIG_UBLK 1 00:09:16.513 #define SPDK_CONFIG_UBSAN 1 00:09:16.513 #undef SPDK_CONFIG_UNIT_TESTS 00:09:16.513 #undef SPDK_CONFIG_URING 00:09:16.513 #define SPDK_CONFIG_URING_PATH 00:09:16.513 #undef SPDK_CONFIG_URING_ZNS 00:09:16.513 #undef SPDK_CONFIG_USDT 00:09:16.513 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:16.513 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:16.513 #define SPDK_CONFIG_VFIO_USER 1 00:09:16.513 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:16.513 #define SPDK_CONFIG_VHOST 1 00:09:16.513 #define SPDK_CONFIG_VIRTIO 1 00:09:16.513 #undef SPDK_CONFIG_VTUNE 00:09:16.513 #define SPDK_CONFIG_VTUNE_DIR 00:09:16.513 #define SPDK_CONFIG_WERROR 1 00:09:16.513 #define SPDK_CONFIG_WPDK_DIR 00:09:16.513 #undef SPDK_CONFIG_XNVME 00:09:16.513 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:16.513 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:16.774 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:16.774 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:09:16.774 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:16.774 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:09:16.774 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:16.775 20:29:09 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:09:16.775 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # : 0 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # export 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:16.776 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:16.777 20:29:09 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@206 -- # cat 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # export 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # _LCOV= 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # [[ 1 -eq 1 ]] 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _LCOV=1 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@275 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # export valgrind= 00:09:16.777 20:29:09 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # valgrind= 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # uname -s 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@289 -- # MAKE=make 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # [[ -z 1848525 ]] 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # kill -0 1848525 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@344 -- # local mount target_dir 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.lU3cG4 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.lU3cG4/tests/vfio /tmp/spdk.lU3cG4 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # df -T 00:09:16.777 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=49477722112 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=61734400000 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # 
uses["$mount"]=12256677888 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30862434304 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30867197952 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=12340969472 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346880000 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=5910528 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=30866624512 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=30867202048 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=577536 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # avails["$mount"]=6173425664 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173437952 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:16.778 * Looking for test storage... 
00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # mount=/ 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # target_space=49477722112 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@394 -- # new_size=14471270400 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:16.778 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@402 -- # return 0 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1703 -- # true 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:09:16.778 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.779 --rc genhtml_branch_coverage=1 00:09:16.779 --rc genhtml_function_coverage=1 00:09:16.779 --rc genhtml_legend=1 00:09:16.779 --rc geninfo_all_blocks=1 00:09:16.779 --rc geninfo_unexecuted_blocks=1 00:09:16.779 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:16.779 ' 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.779 --rc genhtml_branch_coverage=1 00:09:16.779 --rc genhtml_function_coverage=1 00:09:16.779 --rc genhtml_legend=1 00:09:16.779 --rc geninfo_all_blocks=1 00:09:16.779 --rc geninfo_unexecuted_blocks=1 00:09:16.779 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:16.779 ' 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.779 --rc genhtml_branch_coverage=1 00:09:16.779 --rc genhtml_function_coverage=1 00:09:16.779 --rc genhtml_legend=1 00:09:16.779 --rc geninfo_all_blocks=1 00:09:16.779 --rc geninfo_unexecuted_blocks=1 00:09:16.779 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:16.779 ' 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.779 --rc genhtml_branch_coverage=1 00:09:16.779 --rc genhtml_function_coverage=1 00:09:16.779 --rc genhtml_legend=1 00:09:16.779 --rc geninfo_all_blocks=1 00:09:16.779 --rc geninfo_unexecuted_blocks=1 00:09:16.779 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:16.779 ' 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:09:16.779 20:29:10 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:09:16.779 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:16.779 20:29:10 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:09:16.779 [2024-12-05 20:29:10.199566] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:16.779 [2024-12-05 20:29:10.199642] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848590 ] 00:09:17.038 [2024-12-05 20:29:10.282557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.038 [2024-12-05 20:29:10.332619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.297 INFO: Running with entropic power schedule (0xFF, 100). 00:09:17.297 INFO: Seed: 2666818101 00:09:17.297 INFO: Loaded 1 modules (387669 inline 8-bit counters): 387669 [0x2c40c8c, 0x2c9f6e1), 00:09:17.297 INFO: Loaded 1 PC tables (387669 PCs): 387669 [0x2c9f6e8,0x3289c38), 00:09:17.297 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:17.297 INFO: A corpus is not provided, starting from an empty corpus 00:09:17.297 #2 INITED exec/s: 0 rss: 67Mb 00:09:17.297 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:17.297 This may also happen if the target rejected all inputs we tried so far 00:09:17.297 [2024-12-05 20:29:10.588846] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:09:17.814 NEW_FUNC[1/675]: 0x43b608 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:09:17.814 NEW_FUNC[2/675]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:17.814 #14 NEW cov: 11241 ft: 11209 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:18.073 NEW_FUNC[1/1]: 0x15d38d8 in cq_free_slots /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1718 00:09:18.073 #20 NEW cov: 11262 ft: 14916 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeByte- 00:09:18.073 NEW_FUNC[1/1]: 0x1c208a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:18.073 #21 NEW cov: 11279 ft: 15407 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:09:18.332 #22 NEW cov: 11279 ft: 16537 corp: 5/25b lim: 6 exec/s: 22 rss: 76Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:09:18.590 #23 NEW cov: 11279 ft: 17020 corp: 6/31b lim: 6 exec/s: 23 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:09:18.850 #27 NEW cov: 11279 ft: 17210 corp: 7/37b lim: 6 exec/s: 27 rss: 77Mb L: 6/6 MS: 4 EraseBytes-InsertByte-CrossOver-CrossOver- 00:09:18.850 #28 NEW cov: 11279 ft: 18088 corp: 8/43b lim: 6 exec/s: 28 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:09:19.109 #30 NEW cov: 11296 ft: 18352 corp: 9/49b lim: 6 exec/s: 30 rss: 77Mb L: 6/6 MS: 2 EraseBytes-CrossOver- 00:09:19.367 #31 NEW cov: 11296 ft: 18631 corp: 10/55b lim: 6 exec/s: 15 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:09:19.367 #31 DONE cov: 11296 ft: 18631 corp: 10/55b lim: 6 exec/s: 15 rss: 77Mb 00:09:19.367 Done 31 runs in 2 second(s) 00:09:19.367 [2024-12-05 20:29:12.655970] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:09:19.626 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:19.626 20:29:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:09:19.626 [2024-12-05 20:29:12.946052] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:19.626 [2024-12-05 20:29:12.946133] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848956 ] 00:09:19.626 [2024-12-05 20:29:13.029503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.885 [2024-12-05 20:29:13.077614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.885 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:19.885 INFO: Seed: 1121845125 00:09:19.885 INFO: Loaded 1 modules (387669 inline 8-bit counters): 387669 [0x2c40c8c, 0x2c9f6e1), 00:09:19.885 INFO: Loaded 1 PC tables (387669 PCs): 387669 [0x2c9f6e8,0x3289c38), 00:09:19.885 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:19.885 INFO: A corpus is not provided, starting from an empty corpus 00:09:19.885 #2 INITED exec/s: 0 rss: 68Mb 00:09:19.885 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:19.885 This may also happen if the target rejected all inputs we tried so far 00:09:20.144 [2024-12-05 20:29:13.348782] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:09:20.144 [2024-12-05 20:29:13.410790] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:20.144 [2024-12-05 20:29:13.410818] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:20.144 [2024-12-05 20:29:13.410838] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:20.402 NEW_FUNC[1/678]: 0x43bba8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:09:20.402 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:20.402 #6 NEW cov: 11240 ft: 10926 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 ShuffleBytes-ChangeByte-CMP-CopyPart- DE: "\030\000"- 00:09:20.661 [2024-12-05 20:29:13.907331] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:20.661 [2024-12-05 20:29:13.907375] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:20.661 [2024-12-05 20:29:13.907394] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:20.661 #7 NEW cov: 11254 ft: 14542 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CrossOver- 00:09:20.920 [2024-12-05 20:29:14.109323] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:20.920 [2024-12-05 20:29:14.109350] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:20.920 [2024-12-05 20:29:14.109369] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:20.920 NEW_FUNC[1/1]: 0x1c208a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:20.920 #13 NEW cov: 11271 ft: 15385 corp: 4/13b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 PersAutoDict- DE: "\030\000"- 00:09:20.920 [2024-12-05 20:29:14.301329] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:20.920 [2024-12-05 20:29:14.301359] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:20.920 [2024-12-05 20:29:14.301377] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:21.178 #15 NEW cov: 11271 ft: 16491 corp: 5/17b lim: 4 exec/s: 15 rss: 76Mb L: 4/4 MS: 2 CopyPart-PersAutoDict- DE: "\030\000"- 00:09:21.178 [2024-12-05 20:29:14.513341] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:21.178 [2024-12-05 20:29:14.513367] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:21.178 [2024-12-05 20:29:14.513386] 
vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:21.437 #16 NEW cov: 11271 ft: 17129 corp: 6/21b lim: 4 exec/s: 16 rss: 76Mb L: 4/4 MS: 1 CMP- DE: "\001\000\000\001"- 00:09:21.437 [2024-12-05 20:29:14.713453] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:21.437 [2024-12-05 20:29:14.713476] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:21.437 [2024-12-05 20:29:14.713494] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:21.437 #22 NEW cov: 11271 ft: 17197 corp: 7/25b lim: 4 exec/s: 22 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:09:21.695 [2024-12-05 20:29:14.907257] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:21.695 [2024-12-05 20:29:14.907279] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:21.695 [2024-12-05 20:29:14.907297] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:21.695 #23 NEW cov: 11271 ft: 17798 corp: 8/29b lim: 4 exec/s: 23 rss: 76Mb L: 4/4 MS: 1 CopyPart- 00:09:21.695 [2024-12-05 20:29:15.107211] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:21.695 [2024-12-05 20:29:15.107234] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:21.695 [2024-12-05 20:29:15.107253] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:21.953 #24 NEW cov: 11278 ft: 17886 corp: 9/33b lim: 4 exec/s: 24 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:09:21.953 [2024-12-05 20:29:15.297071] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:21.953 [2024-12-05 20:29:15.297094] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:21.953 [2024-12-05 20:29:15.297112] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:22.211 #25 NEW cov: 11278 ft: 18275 corp: 10/37b lim: 4 exec/s: 12 rss: 76Mb L: 4/4 MS: 1 CopyPart- 00:09:22.211 #25 DONE cov: 11278 ft: 18275 corp: 10/37b lim: 4 exec/s: 12 rss: 76Mb 00:09:22.211 ###### Recommended dictionary. ###### 00:09:22.211 "\030\000" # Uses: 3 00:09:22.211 "\001\000\000\001" # Uses: 0 00:09:22.211 ###### End of recommended dictionary. 
###### 00:09:22.211 Done 25 runs in 2 second(s) 00:09:22.211 [2024-12-05 20:29:15.434976] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:09:22.470 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:22.470 20:29:15 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:09:22.470 [2024-12-05 20:29:15.726553] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:22.470 [2024-12-05 20:29:15.726634] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849331 ] 00:09:22.470 [2024-12-05 20:29:15.809000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.470 [2024-12-05 20:29:15.855607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.729 INFO: Running with entropic power schedule (0xFF, 100). 00:09:22.729 INFO: Seed: 3896838896 00:09:22.729 INFO: Loaded 1 modules (387669 inline 8-bit counters): 387669 [0x2c40c8c, 0x2c9f6e1), 00:09:22.729 INFO: Loaded 1 PC tables (387669 PCs): 387669 [0x2c9f6e8,0x3289c38), 00:09:22.729 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:22.729 INFO: A corpus is not provided, starting from an empty corpus 00:09:22.729 #2 INITED exec/s: 0 rss: 67Mb 00:09:22.729 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:22.729 This may also happen if the target rejected all inputs we tried so far 00:09:22.729 [2024-12-05 20:29:16.113759] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:09:22.988 [2024-12-05 20:29:16.183287] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:23.246 NEW_FUNC[1/677]: 0x43c598 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:09:23.246 NEW_FUNC[2/677]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:23.246 #20 NEW cov: 11227 ft: 11182 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 3 InsertRepeatedBytes-InsertByte-CrossOver- 00:09:23.246 [2024-12-05 20:29:16.673758] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:23.505 #26 NEW cov: 11241 ft: 14251 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:09:23.505 [2024-12-05 20:29:16.866876] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:23.763 NEW_FUNC[1/1]: 0x1c208a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:23.763 #27 NEW cov: 11258 ft: 14960 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes- 00:09:23.763 [2024-12-05 20:29:17.057746] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:23.763 #38 NEW cov: 11258 ft: 15140 corp: 5/33b lim: 8 exec/s: 38 rss: 76Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:24.022 [2024-12-05 20:29:17.252146] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:24.022 #59 NEW cov: 11258 ft: 15392 corp: 6/41b lim: 8 exec/s: 59 rss: 77Mb L: 8/8 MS: 1 CopyPart- 00:09:24.022 [2024-12-05 20:29:17.444751] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:24.281 #60 NEW cov: 11258 ft: 15492 corp: 7/49b lim: 8 exec/s: 60 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:24.281 [2024-12-05 20:29:17.639369] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:24.539 #64 NEW cov: 11258 ft: 15832 corp: 8/57b lim: 8 exec/s: 64 rss: 77Mb L: 8/8 MS: 4 EraseBytes-ChangeBinInt-CopyPart-CopyPart- 
00:09:24.539 [2024-12-05 20:29:17.840550] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:24.539 #65 NEW cov: 11265 ft: 16664 corp: 9/65b lim: 8 exec/s: 65 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:24.798 [2024-12-05 20:29:18.038376] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:24.798 #66 NEW cov: 11265 ft: 16728 corp: 10/73b lim: 8 exec/s: 33 rss: 77Mb L: 8/8 MS: 1 ChangeByte- 00:09:24.798 #66 DONE cov: 11265 ft: 16728 corp: 10/73b lim: 8 exec/s: 33 rss: 77Mb 00:09:24.798 Done 66 runs in 2 second(s) 00:09:24.798 [2024-12-05 20:29:18.170968] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:09:25.058 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:25.058 20:29:18 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:09:25.058 [2024-12-05 20:29:18.460738] Starting SPDK v25.01-pre git 
sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:25.058 [2024-12-05 20:29:18.460817] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849707 ] 00:09:25.317 [2024-12-05 20:29:18.542422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.317 [2024-12-05 20:29:18.587166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.575 INFO: Running with entropic power schedule (0xFF, 100). 00:09:25.575 INFO: Seed: 2324883570 00:09:25.575 INFO: Loaded 1 modules (387669 inline 8-bit counters): 387669 [0x2c40c8c, 0x2c9f6e1), 00:09:25.575 INFO: Loaded 1 PC tables (387669 PCs): 387669 [0x2c9f6e8,0x3289c38), 00:09:25.575 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:09:25.575 INFO: A corpus is not provided, starting from an empty corpus 00:09:25.575 #2 INITED exec/s: 0 rss: 68Mb 00:09:25.575 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:25.575 This may also happen if the target rejected all inputs we tried so far 00:09:25.575 [2024-12-05 20:29:18.844037] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:09:26.091 NEW_FUNC[1/676]: 0x43cc88 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:09:26.092 NEW_FUNC[2/676]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:26.092 #41 NEW cov: 11234 ft: 10869 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 4 CopyPart-CMP-ChangeBinInt-InsertRepeatedBytes- DE: "\000\000\000\000\000\000\002\000"- 00:09:26.092 NEW_FUNC[1/1]: 0x1f763a8 in spdk_get_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1282 00:09:26.092 #47 NEW cov: 11249 ft: 13990 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:26.350 NEW_FUNC[1/1]: 0x1c208a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:26.350 #53 NEW cov: 11266 ft: 15167 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:26.609 [2024-12-05 20:29:19.802594] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 18446744073709551615 > max 8796093022208 00:09:26.609 [2024-12-05 20:29:19.802638] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xffffffffffffff00, 0xfffffffffffffeff) offset=0xa04ff0000000000 flags=0x3: No space left on device 00:09:26.609 [2024-12-05 20:29:19.802651] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:09:26.609 [2024-12-05 20:29:19.802669] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:26.609 NEW_FUNC[1/1]: 0x158fbc8 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3131 00:09:26.609 #54 NEW cov: 11277 ft: 15284 corp: 5/129b lim: 32 exec/s: 54 rss: 76Mb L: 32/32 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\002\000"- 00:09:26.609 [2024-12-05 20:29:20.004709] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 18446744073709551360 > max 8796093022208 00:09:26.609 [2024-12-05 20:29:20.004739] 
vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x200000000000000, 0x1ffffffffffff00) offset=0xa04ff0000000000 flags=0x3: No space left on device 00:09:26.609 [2024-12-05 20:29:20.004762] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:09:26.609 [2024-12-05 20:29:20.004781] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:26.867 #55 NEW cov: 11277 ft: 16417 corp: 6/161b lim: 32 exec/s: 55 rss: 76Mb L: 32/32 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\002\000"- 00:09:27.126 #56 NEW cov: 11277 ft: 17023 corp: 7/193b lim: 32 exec/s: 56 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:09:27.126 [2024-12-05 20:29:20.396437] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 18392700878181105663 > max 8796093022208 00:09:27.126 [2024-12-05 20:29:20.396472] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0xffffffffffffff00, 0xff3ffffffffffeff) offset=0xa04ff0000000000 flags=0x3: No space left on device 00:09:27.126 [2024-12-05 20:29:20.396485] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:09:27.126 [2024-12-05 20:29:20.396506] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:27.126 #62 NEW cov: 11277 ft: 17198 corp: 8/225b lim: 32 exec/s: 62 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:09:27.384 [2024-12-05 20:29:20.600996] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 72057594037927680 > max 8796093022208 00:09:27.384 [2024-12-05 20:29:20.601021] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x200000000000000, 0x2ffffffffffff00) offset=0xa00020000000000 flags=0x3: No space left on device 00:09:27.384 [2024-12-05 20:29:20.601033] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:09:27.384 [2024-12-05 20:29:20.601049] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:27.384 #63 NEW cov: 11284 ft: 17630 corp: 9/257b lim: 32 exec/s: 63 rss: 76Mb L: 32/32 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\002\000"- 00:09:27.384 [2024-12-05 20:29:20.795809] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: DMA region size 18446744073709551360 > max 8796093022208 00:09:27.384 [2024-12-05 20:29:20.795833] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: failed to add DMA region [0x20000000000, 0x1ffffffff00) offset=0xa04ff0000000000 flags=0x3: No space left on device 00:09:27.384 [2024-12-05 20:29:20.795845] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-3/domain/1: msg0: cmd 2 failed: No space left on device 00:09:27.384 [2024-12-05 20:29:20.795861] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:27.641 #64 pulse cov: 11284 ft: 17722 corp: 9/257b lim: 32 exec/s: 32 rss: 77Mb 00:09:27.641 #64 NEW cov: 11284 ft: 17722 corp: 10/289b lim: 32 exec/s: 32 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:27.641 #64 DONE cov: 11284 ft: 17722 corp: 10/289b lim: 32 exec/s: 32 rss: 77Mb 00:09:27.641 ###### Recommended dictionary. ###### 00:09:27.641 "\000\000\000\000\000\000\002\000" # Uses: 5 00:09:27.641 ###### End of recommended dictionary. 
###### 00:09:27.641 Done 64 runs in 2 second(s) 00:09:27.641 [2024-12-05 20:29:20.933958] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:09:27.899 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:27.899 20:29:21 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:09:27.899 [2024-12-05 20:29:21.206242] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 
00:09:27.899 [2024-12-05 20:29:21.206323] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850077 ] 00:09:27.899 [2024-12-05 20:29:21.288622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.899 [2024-12-05 20:29:21.335470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.158 INFO: Running with entropic power schedule (0xFF, 100). 00:09:28.158 INFO: Seed: 786915249 00:09:28.158 INFO: Loaded 1 modules (387669 inline 8-bit counters): 387669 [0x2c40c8c, 0x2c9f6e1), 00:09:28.158 INFO: Loaded 1 PC tables (387669 PCs): 387669 [0x2c9f6e8,0x3289c38), 00:09:28.158 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:09:28.158 INFO: A corpus is not provided, starting from an empty corpus 00:09:28.158 #2 INITED exec/s: 0 rss: 67Mb 00:09:28.158 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:28.158 This may also happen if the target rejected all inputs we tried so far 00:09:28.417 [2024-12-05 20:29:21.597135] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:09:28.417 [2024-12-05 20:29:21.656812] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=304 offset=0x6000000000000000 prot=0x3: Invalid argument 00:09:28.417 [2024-12-05 20:29:21.656841] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x6000000000000000 flags=0x3: Invalid argument 00:09:28.417 [2024-12-05 20:29:21.656852] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:09:28.417 [2024-12-05 20:29:21.656887] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:28.417 [2024-12-05 20:29:21.657794] vfio_user.c:3141:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:09:28.417 [2024-12-05 20:29:21.657808] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:09:28.417 [2024-12-05 20:29:21.657824] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:09:28.675 NEW_FUNC[1/678]: 0x43d508 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:09:28.675 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:28.675 #69 NEW cov: 11243 ft: 11010 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:28.933 #70 NEW cov: 11257 ft: 14063 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:09:29.192 NEW_FUNC[1/1]: 0x1c208a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:29.192 #76 NEW cov: 11277 ft: 15200 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:29.450 #82 NEW cov: 11277 ft: 15844 corp: 5/129b lim: 32 exec/s: 82 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:09:29.450 #83 NEW cov: 11277 ft: 16042 corp: 6/161b lim: 32 exec/s: 83 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:29.708 [2024-12-05 20:29:22.903527] 
vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=306 offset=0x6000000000000000 prot=0x3: Invalid argument 00:09:29.708 [2024-12-05 20:29:22.903571] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x6000000000000000 flags=0x3: Invalid argument 00:09:29.708 [2024-12-05 20:29:22.903586] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:09:29.708 [2024-12-05 20:29:22.903604] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:29.708 [2024-12-05 20:29:22.904551] vfio_user.c:3141:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:09:29.708 [2024-12-05 20:29:22.904571] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:09:29.708 [2024-12-05 20:29:22.904587] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:09:29.708 #84 NEW cov: 11277 ft: 16410 corp: 7/193b lim: 32 exec/s: 84 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:29.708 [2024-12-05 20:29:23.090640] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=306 offset=0x6000000000000000 prot=0x3: Invalid argument 00:09:29.708 [2024-12-05 20:29:23.090666] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x6000000000000000 flags=0x3: Invalid argument 00:09:29.708 [2024-12-05 20:29:23.090677] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:09:29.708 [2024-12-05 20:29:23.090694] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:29.708 [2024-12-05 20:29:23.091649] vfio_user.c:3141:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:09:29.708 [2024-12-05 20:29:23.091668] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:09:29.708 [2024-12-05 20:29:23.091684] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:09:29.966 #85 NEW cov: 11277 ft: 16755 corp: 8/225b lim: 32 exec/s: 85 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:29.966 #86 NEW cov: 11284 ft: 17086 corp: 9/257b lim: 32 exec/s: 86 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:09:30.223 #87 NEW cov: 11284 ft: 17264 corp: 10/289b lim: 32 exec/s: 43 rss: 76Mb L: 32/32 MS: 1 ChangeByte- 00:09:30.223 #87 DONE cov: 11284 ft: 17264 corp: 10/289b lim: 32 exec/s: 43 rss: 76Mb 00:09:30.224 Done 87 runs in 2 second(s) 00:09:30.224 [2024-12-05 20:29:23.596965] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 
00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:09:30.481 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:09:30.482 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:30.482 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:30.482 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:30.482 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:09:30.482 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:30.482 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:30.482 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:30.482 20:29:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:09:30.482 [2024-12-05 20:29:23.888436] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:30.482 [2024-12-05 20:29:23.888514] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850449 ] 00:09:30.740 [2024-12-05 20:29:23.971696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.740 [2024-12-05 20:29:24.019543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.997 INFO: Running with entropic power schedule (0xFF, 100). 00:09:30.997 INFO: Seed: 3466914818 00:09:30.997 INFO: Loaded 1 modules (387669 inline 8-bit counters): 387669 [0x2c40c8c, 0x2c9f6e1), 00:09:30.997 INFO: Loaded 1 PC tables (387669 PCs): 387669 [0x2c9f6e8,0x3289c38), 00:09:30.997 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:30.997 INFO: A corpus is not provided, starting from an empty corpus 00:09:30.997 #2 INITED exec/s: 0 rss: 67Mb 00:09:30.997 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:30.997 This may also happen if the target rejected all inputs we tried so far 00:09:30.997 [2024-12-05 20:29:24.274020] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:09:30.997 [2024-12-05 20:29:24.345994] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:30.997 [2024-12-05 20:29:24.346037] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:31.512 NEW_FUNC[1/678]: 0x43df08 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:09:31.512 NEW_FUNC[2/678]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:31.512 #212 NEW cov: 11246 ft: 11191 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 5 CrossOver-InsertRepeatedBytes-ShuffleBytes-CrossOver-InsertByte- 00:09:31.512 [2024-12-05 20:29:24.827316] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:31.512 [2024-12-05 20:29:24.827368] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:31.512 #213 NEW cov: 11260 ft: 14396 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeByte- 00:09:31.773 [2024-12-05 20:29:25.020159] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:31.773 [2024-12-05 20:29:25.020192] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:31.773 NEW_FUNC[1/1]: 0x1c208a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:31.773 #214 NEW cov: 11277 ft: 16224 corp: 4/40b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:09:32.030 [2024-12-05 20:29:25.210755] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:32.030 [2024-12-05 20:29:25.210787] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:32.030 #215 NEW cov: 11277 ft: 16689 corp: 5/53b lim: 13 exec/s: 215 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:09:32.030 [2024-12-05 20:29:25.391157] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:32.030 [2024-12-05 20:29:25.391186] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:32.286 #216 NEW cov: 11277 ft: 17317 corp: 6/66b lim: 13 exec/s: 216 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:09:32.286 [2024-12-05 20:29:25.580100] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:32.286 [2024-12-05 20:29:25.580132] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:32.286 #217 NEW cov: 11277 ft: 17592 corp: 7/79b lim: 13 exec/s: 217 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:09:32.543 [2024-12-05 20:29:25.760528] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:32.543 [2024-12-05 20:29:25.760562] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:32.543 #218 NEW cov: 11277 ft: 17704 corp: 8/92b lim: 13 exec/s: 218 rss: 76Mb L: 13/13 MS: 1 ChangeBinInt- 00:09:32.543 [2024-12-05 20:29:25.939871] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:32.543 [2024-12-05 20:29:25.939906] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
00:09:32.799 #219 NEW cov: 11284 ft: 17766 corp: 9/105b lim: 13 exec/s: 219 rss: 76Mb L: 13/13 MS: 1 ChangeByte- 00:09:32.799 [2024-12-05 20:29:26.119539] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:32.799 [2024-12-05 20:29:26.119572] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:32.799 #220 NEW cov: 11284 ft: 18061 corp: 10/118b lim: 13 exec/s: 220 rss: 77Mb L: 13/13 MS: 1 ChangeByte- 00:09:33.056 [2024-12-05 20:29:26.297171] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:33.056 [2024-12-05 20:29:26.297203] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:33.056 #221 NEW cov: 11284 ft: 18136 corp: 11/131b lim: 13 exec/s: 110 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes- 00:09:33.056 #221 DONE cov: 11284 ft: 18136 corp: 11/131b lim: 13 exec/s: 110 rss: 77Mb 00:09:33.056 Done 221 runs in 2 second(s) 00:09:33.056 [2024-12-05 20:29:26.426974] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:33.313 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:09:33.314 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:33.314 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:33.314 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:33.314 20:29:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:09:33.314 [2024-12-05 20:29:26.721204] Starting SPDK v25.01-pre git sha1 2c140f58f / DPDK 24.03.0 initialization... 00:09:33.314 [2024-12-05 20:29:26.721302] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850825 ] 00:09:33.571 [2024-12-05 20:29:26.804558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.571 [2024-12-05 20:29:26.851238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.834 INFO: Running with entropic power schedule (0xFF, 100). 00:09:33.834 INFO: Seed: 2002952846 00:09:33.834 INFO: Loaded 1 modules (387669 inline 8-bit counters): 387669 [0x2c40c8c, 0x2c9f6e1), 00:09:33.834 INFO: Loaded 1 PC tables (387669 PCs): 387669 [0x2c9f6e8,0x3289c38), 00:09:33.834 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:33.834 INFO: A corpus is not provided, starting from an empty corpus 00:09:33.834 #2 INITED exec/s: 0 rss: 68Mb 00:09:33.834 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:33.834 This may also happen if the target rejected all inputs we tried so far 00:09:33.834 [2024-12-05 20:29:27.113239] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:09:33.834 [2024-12-05 20:29:27.158840] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:33.834 [2024-12-05 20:29:27.158874] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:34.349 NEW_FUNC[1/673]: 0x43ebf8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:09:34.349 NEW_FUNC[2/673]: 0x441118 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:34.349 #99 NEW cov: 11188 ft: 11203 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:34.349 [2024-12-05 20:29:27.648938] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:34.349 [2024-12-05 20:29:27.648989] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:34.349 NEW_FUNC[1/5]: 0x15c9dd8 in post_completion /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1770 00:09:34.350 NEW_FUNC[2/5]: 0x15d21c8 in cq_is_full /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1741 00:09:34.350 #110 NEW cov: 11248 ft: 14440 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:09:34.607 [2024-12-05 20:29:27.837556] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:34.607 [2024-12-05 20:29:27.837590] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:34.607 NEW_FUNC[1/1]: 0x1c208a8 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:34.607 #113 NEW cov: 11265 ft: 15369 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 3 ChangeByte-CrossOver-CMP- DE: "y\000\000\000\000\000\000\000"- 00:09:34.607 [2024-12-05 20:29:28.037759] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:34.607 [2024-12-05 20:29:28.037799] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:34.865 #114 NEW cov: 11265 ft: 15884 corp: 5/37b lim: 9 exec/s: 114 rss: 76Mb L: 9/9 MS: 1 ChangeByte- 00:09:34.865 [2024-12-05 20:29:28.218672] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:34.865 [2024-12-05 20:29:28.218703] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:35.124 #116 NEW cov: 11265 ft: 16546 corp: 6/46b lim: 9 exec/s: 116 rss: 76Mb L: 9/9 MS: 2 EraseBytes-CopyPart- 00:09:35.124 [2024-12-05 20:29:28.398313] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:35.124 [2024-12-05 20:29:28.398349] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:35.124 #117 NEW cov: 11265 ft: 17505 corp: 7/55b lim: 9 exec/s: 117 rss: 76Mb L: 9/9 MS: 1 PersAutoDict- DE: "y\000\000\000\000\000\000\000"- 00:09:35.382 [2024-12-05 20:29:28.589963] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:35.382 [2024-12-05 20:29:28.589994] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:35.382 #118 NEW cov: 11265 ft: 18326 corp: 8/64b lim: 9 exec/s: 118 rss: 76Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:35.382 [2024-12-05 20:29:28.782910] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:35.382 [2024-12-05 20:29:28.782940] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:35.641 #119 NEW cov: 11272 ft: 18504 corp: 9/73b lim: 9 exec/s: 119 rss: 76Mb L: 9/9 MS: 1 CMP- DE: "\001\000\000\002"- 00:09:35.641 [2024-12-05 20:29:28.973642] vfio_user.c:3143:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:35.641 [2024-12-05 20:29:28.973670] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:35.900 #120 NEW cov: 11272 ft: 18693 corp: 10/82b lim: 9 exec/s: 60 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:09:35.900 #120 DONE cov: 11272 ft: 18693 corp: 10/82b lim: 9 exec/s: 60 rss: 76Mb 00:09:35.900 ###### Recommended dictionary. ###### 00:09:35.900 "y\000\000\000\000\000\000\000" # Uses: 1 00:09:35.900 "\001\000\000\002" # Uses: 0 00:09:35.900 ###### End of recommended dictionary. 
###### 00:09:35.900 Done 120 runs in 2 second(s) 00:09:35.900 [2024-12-05 20:29:29.111958] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:09:36.160 20:29:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:09:36.160 20:29:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:36.160 20:29:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:36.160 20:29:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:09:36.160 00:09:36.160 real 0m19.633s 00:09:36.160 user 0m27.462s 00:09:36.160 sys 0m1.996s 00:09:36.160 20:29:29 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.160 20:29:29 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:36.160 ************************************ 00:09:36.160 END TEST vfio_llvm_fuzz 00:09:36.160 ************************************ 00:09:36.160 00:09:36.160 real 1m24.737s 00:09:36.160 user 2m7.683s 00:09:36.160 sys 0m10.280s 00:09:36.160 20:29:29 llvm_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.160 20:29:29 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:36.160 ************************************ 00:09:36.160 END TEST llvm_fuzz 00:09:36.160 ************************************ 00:09:36.160 20:29:29 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:09:36.160 20:29:29 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:09:36.160 20:29:29 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:09:36.160 20:29:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.160 20:29:29 -- common/autotest_common.sh@10 -- # set +x 00:09:36.160 20:29:29 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:09:36.160 20:29:29 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:09:36.160 20:29:29 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:09:36.160 20:29:29 -- common/autotest_common.sh@10 -- # set +x 00:09:41.437 INFO: APP EXITING 00:09:41.437 INFO: killing all VMs 00:09:41.437 INFO: killing vhost app 00:09:41.437 WARN: no vhost pid file found 00:09:41.437 INFO: EXIT DONE 00:09:44.728 Waiting for block devices as requested 00:09:44.728 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:09:44.728 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:09:44.728 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:44.728 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:44.728 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:44.988 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:44.988 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:44.988 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:45.248 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:45.248 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:45.248 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:09:45.508 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:45.508 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:45.508 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:45.767 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:45.767 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:45.767 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:46.025 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:46.025 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:50.212 Cleaning 00:09:50.212 Removing: /dev/shm/spdk_tgt_trace.pid1828363 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1827823 00:09:50.212 Removing: 
/var/run/dpdk/spdk_pid1828363 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1828866 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1829903 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1830243 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1831118 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1831124 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1831476 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1831725 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1831966 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1832226 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1832472 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1832675 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1832876 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1833114 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1833716 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1836157 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1836373 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1836579 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1836663 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1837153 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1837165 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1837572 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1837699 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1837962 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1837972 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1838182 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1838188 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1838659 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1838858 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1839058 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1839218 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1839732 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1840098 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1840473 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1840841 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1841213 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1841578 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1841952 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1842327 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1842697 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1842974 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1843272 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1843649 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1844014 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1844388 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1844753 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1845051 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1845332 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1845697 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1846071 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1846436 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1846808 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1847139 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1847397 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1847747 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1848126 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1848590 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1848956 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1849331 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1849707 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1850077 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1850449 00:09:50.212 Removing: /var/run/dpdk/spdk_pid1850825 00:09:50.212 Clean 00:09:50.212 20:29:43 -- common/autotest_common.sh@1453 -- # return 0 00:09:50.212 20:29:43 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:09:50.212 20:29:43 -- common/autotest_common.sh@732 -- # 
xtrace_disable 00:09:50.212 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:09:50.212 20:29:43 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:09:50.212 20:29:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.213 20:29:43 -- common/autotest_common.sh@10 -- # set +x 00:09:50.213 20:29:43 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:50.213 20:29:43 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:09:50.213 20:29:43 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:09:50.213 20:29:43 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:09:50.213 20:29:43 -- spdk/autotest.sh@398 -- # hostname 00:09:50.213 20:29:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-29 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:09:50.213 geninfo: WARNING: invalid characters removed from testname! 00:09:54.395 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:09:59.664 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:10:01.039 20:29:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:09.250 20:30:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:14.517 20:30:07 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:21.075 20:30:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:25.255 20:30:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:31.809 20:30:24 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:10:37.072 20:30:29 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:10:37.072 20:30:29 -- spdk/autorun.sh@1 -- $ timing_finish 00:10:37.072 20:30:29 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]] 00:10:37.072 20:30:29 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:10:37.072 20:30:29 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:10:37.072 20:30:29 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:10:37.072 + [[ -n 1726199 ]] 00:10:37.072 + sudo kill 1726199 00:10:37.082 [Pipeline] } 00:10:37.101 [Pipeline] // stage 00:10:37.110 [Pipeline] } 00:10:37.125 [Pipeline] // timeout 00:10:37.132 [Pipeline] } 00:10:37.146 [Pipeline] // catchError 00:10:37.151 [Pipeline] } 00:10:37.166 [Pipeline] // wrap 00:10:37.173 [Pipeline] } 00:10:37.185 [Pipeline] // catchError 00:10:37.193 [Pipeline] stage 00:10:37.195 [Pipeline] { (Epilogue) 00:10:37.215 [Pipeline] catchError 00:10:37.217 [Pipeline] { 00:10:37.229 [Pipeline] echo 00:10:37.231 Cleanup processes 00:10:37.238 [Pipeline] sh 00:10:37.525 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:37.525 1857986 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:37.541 [Pipeline] sh 00:10:37.827 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:37.827 ++ grep -v 'sudo pgrep' 00:10:37.827 ++ awk '{print $1}' 00:10:37.827 + sudo kill -9 00:10:37.827 + true 00:10:37.840 [Pipeline] sh 00:10:38.124 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:10:50.341 [Pipeline] sh 00:10:50.626 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:10:50.626 Artifacts sizes are good 00:10:50.641 [Pipeline] archiveArtifacts 00:10:50.649 Archiving artifacts 00:10:50.782 [Pipeline] sh 00:10:51.129 + sudo chown -R sys_sgci: 
/var/jenkins/workspace/short-fuzz-phy-autotest 00:10:51.151 [Pipeline] cleanWs 00:10:51.161 [WS-CLEANUP] Deleting project workspace... 00:10:51.161 [WS-CLEANUP] Deferred wipeout is used... 00:10:51.168 [WS-CLEANUP] done 00:10:51.171 [Pipeline] } 00:10:51.189 [Pipeline] // catchError 00:10:51.201 [Pipeline] sh 00:10:51.482 + logger -p user.info -t JENKINS-CI 00:10:51.491 [Pipeline] } 00:10:51.505 [Pipeline] // stage 00:10:51.511 [Pipeline] } 00:10:51.526 [Pipeline] // node 00:10:51.531 [Pipeline] End of Pipeline 00:10:51.604 Finished: SUCCESS